<p>I know it's possible to access the static views of the api, but I can't find out the basic auth details that I need to login via the browser. Where can I find these? I'm on GCE and created a cluster.</p>
<p>Run <code>kubectl config view</code>. It'll dump out the auth information used to access your cluster, including the basic auth username and password.</p>
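<p>For illustration, a rough sketch of the relevant part of the output (the cluster name and values below are placeholders, not real credentials):</p> <pre><code>$ kubectl config view
...
users:
- name: my-gce-cluster
  user:
    password: &lt;generated-password&gt;
    username: admin
</code></pre>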
<p>Some time ago I asked about handling thousands of services in a Kubernetes cluster:</p> <p><a href="https://stackoverflow.com/questions/27638334/can-kubernetes-handle-thousands-of-services">Can Kubernetes handle thousands of services?</a></p> <p>At that time Kubernetes was using env vars and my question was more oriented to that. Now that Kubernetes has a DNS sounds like we don't have the problem with env vars anymore, however the docs still says it won't perform well when handling thousands of services:</p> <p><a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#shortcomings" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#shortcomings</a></p> <p>Wanted to know if documentation is outdated or if there are still issues to scale Kubernetes to thousands of services.</p>
<p>The shortcoming mentioned in the documentation has not changed, because Kubernetes still uses the same mechanism (iptables and a userspace proxy) for proxying traffic sent to a service IP to the pods backing the service.</p> <p>However, I don't believe we actually know how bad it is. A team member briefly tried testing it early this year and <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/3345#issuecomment-69290242" rel="nofollow">didn't see any impact</a>, but didn't do anything rigorous to verify. It's possible that it'll work fine at a couple thousand services. If you try it, we'd love to hear how it goes <a href="https://github.com/GoogleCloudPlatform/kubernetes#community-discussion-and-support" rel="nofollow">via IRC or email</a>.</p>
<p>What does "level-based" and "edge-based" mean in general?</p> <p>I read "In other words, the system's behavior is level-based rather than edge-based" from kubernetes documentation: <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api-conventions.md" rel="noreferrer">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api-conventions.md</a></p> <p>with Google, I only find: <a href="http://www.keil.com/forum/9423/edge-based-vs-level-based-interrupt/" rel="noreferrer">http://www.keil.com/forum/9423/edge-based-vs-level-based-interrupt/</a></p> <p>Thank you.</p>
<p>It also has a more general definition (at least the way we tend to use it in the documentation). A piece of logic is "level based" if it only depends on the current state. A piece of logic is "edge-based" if it depends on history/transitions in addition to the current state.</p> <p>"Level based" components are more resilient because if they crash, they can come back up and just look at the current state. "Edge-based" components must store the history they rely on (or depend on some other component that stores it), so that when they come back up they can look at the current state and the history. Also, if there is some kind of temporary network partition and an edge-based component misses some of the updates, then it will compute the wrong output.</p> <p>However, "level based" components are usually less efficient, because they may need to scan a lot of state in order to compute an output, rather than just reading deltas.</p> <p>Many components are a mixture of the two.</p> <p>Simple example: You want to build a component that reports the number of pods in READY state. A level-based implementation would fetch all the pods from etcd (or the API server) and count. An edge-based implementation would do that once at startup, and then just watch for pods entering and exiting READY state.</p>
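<p>As a rough shell illustration of the pod-counting example (commands are illustrative and approximate READY by the Running status; flags and output formats vary by version):</p> <pre><code># Level-based: recompute the count from the full current state every time.
kubectl get pods | grep -c Running

# Edge-based: observe transitions as they happen and maintain a running count.
kubectl get pods --watch
</code></pre>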
<p>I'm using Google Container Engine with a cluster running <em>Kubernetes 0.20.2</em>.</p> <p>In this cluster, I have <em>1 replication controller (2 replicas)</em> and <em>1 service</em> with a spec type set to <strong><em>LoadBalancer</em></strong> (basic setup).</p> <p>Everything is working fine here; then I want to roll out an update to a different image using the <em>kubectl</em> command:</p> <pre><code>kubectl rolling-update my-rc \
  --image=gcr.io/project/gcloudId:my-image-updated \
  --update-period=0m
</code></pre> <p>From what I understood, running this command should take care of giving me a zero-downtime update. Unfortunately, I have been doing some tests with the curl command in a loop, and I still see a downtime of a few seconds. Any ideas why this is happening?</p>
<p>The <code>--update-period</code> flag tells Kubernetes how long to wait between each pod that it's rolling an update to. With the update period set to 0, Kubernetes will update all pods at once, causing a short period of unavailability while the new pods start up. You should set <code>--update-period</code> to be at least as long as it takes each of your pods to initialize. The default value (1 minute) should be fine for almost all cases if you don't want to have to think about it.</p>
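<p>For example, something like the following should give each pod a minute to come up before the next one is replaced (the flag value here is illustrative; the image name is taken from the question):</p> <pre><code>kubectl rolling-update my-rc \
  --image=gcr.io/project/gcloudId:my-image-updated \
  --update-period=1m
</code></pre>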
<p>I've just installed a kubernetes test installation directly on my fedora laptop using <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/v0.18.2/docs/getting-started-guides/fedora/fedora_manual_config.md" rel="nofollow">this guide</a>.</p> <p>After starting kube2sky I've noticed I can't connect to the kubernetes api since certificates are required. kubernetes-ro is deprecated and no longer available on my machine, so I get the following errors:</p> <pre><code>E0627 15:58:07.145720 1 reflector.go:133] Failed to list *api.Service: Get https://10.254.0.1:443/api/v1beta3/services: x509: failed to load system roots and no roots provided
E0627 15:58:07.146844 1 reflector.go:133] Failed to list *api.Endpoints: Get https://10.254.0.1:443/api/v1beta3/endpoints: x509: failed to load system roots and no roots provided
</code></pre> <p>How can I set up the certificates?</p>
<p>This has been a common problem for folks that aren't running on setups that use salt to automatically configure system secrets on the master node (as GCE does). This has been fixed at head and should be fixed in the next release. </p> <p>In the mean time, you can manually create a secret for the DNS service that contains a kubeconfig file for kube2sky to connect to the master. You can see how this is done on GCE by looking at the create-kubeconfig-secret function in <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/saltbase/salt/kube-addons/kube-addons.sh#L24" rel="nofollow">kube-addons.sh</a> (when called with the username "system:dns"). The name of the resulting secret should be token-system-dns. </p>
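<p>A rough sketch of what such a secret might look like (the API version, key name and base64-encoded kubeconfig contents are assumptions based on the GCE scripts; check kube-addons.sh for the exact format used for the "system:dns" user):</p> <pre><code>apiVersion: v1beta3
kind: Secret
metadata:
  name: token-system-dns
type: Opaque
data:
  kubeconfig: &lt;base64-encoded-kubeconfig-for-system:dns&gt;
</code></pre>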
<p>If I have a multi-tier application (say web / logic / database), with each tier having its own container, and I need to deploy all of these en bloc, do they all have to go into the same pod?</p> <p>And if they are in the same pod, does this have any implications in terms of the maximum size of application that can be run?</p> <p>Or is there some higher-level abstraction that I can use to start all three layers, but have them running on different minions?</p>
<p>Why do you need to deploy all of the components together? In a micro services architecture, you would want to reduce the dependencies between each layer to a clean interface and then allow each layer to be deployed and scaled separately from the others. </p> <p>If you need to deploy them together (e.g. they share local disk or localhost networking) then you need to deploy them as a single pod. A single pod is an atomic scheduling unit, so it will be deployed onto a single host machine. Since it lands on a single host, this limits the scalability of your application to the size of a single host (not allowing you to scale out as your traffic increases). </p> <p>If your three layers are not tightly coupled, then you can run them in different pods, which allows them to be scheduled across multiple hosts (or on the same host if, for example, you are doing local development). To connect the pods together, you can define <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md" rel="nofollow">services</a>. </p> <p>You should take a look at the <a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook" rel="nofollow">guestbook example</a> which illustrates how to define pods and services for a simple multi-tier web application running on Kubernetes. </p>
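<p>As a minimal sketch, each tier gets its own pod (or replication controller) plus a service that the other tiers talk to; the names and labels below are illustrative:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: logic
spec:
  selector:
    tier: logic      # matches the labels on the logic-tier pods
  ports:
  - port: 80         # port the service exposes
    targetPort: 8080 # port the logic-tier containers listen on
</code></pre> <p>The web tier then reaches the logic tier via the <code>logic</code> service's cluster IP (or DNS name), regardless of which minion the pods land on.</p>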
<p>We would like to spin up pods quickly on our cluster, to handle 'one-off' tasks (the idea being that each task gets a new pod every time it runs).</p> <p>Currently, it takes about 10-15 seconds from a Pod creation API call -> completion. This is running on 3x m3.xlarge on AWS, with images that have already been cached (I presume, as I am using the same image twice on a single node). We are running with restartPolicy = Never, as they are one-off tasks.</p> <p>I've tried fiddling with the imagePullPolicy (= Never) and resource options, to no avail. It appears that the 10-second delay happens in the 'Running' phase, after Kubernetes has handed the work off to a pod. I can confirm the operation itself is very quick: running it locally on Docker only takes about 0.5s total.</p> <p>Is there any way to speed this up?</p>
<p>Our target is 5s latency from creation -> Running (assuming the image is pre-pulled). The issue tracking this was <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/3954" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/issues/3954</a>.</p> <p>That issue was closed a couple of weeks ago, so please update to version 0.20.2 and give it another try.</p>
<p>I want to set the hostname in a container running inside Kubernetes, but Kubernetes appears to be overriding whatever I set at runtime.</p> <p>I've tried both setting /etc/hostname in the docker image I'm having Kubernetes use, and including <code>echo "host.example.com &gt; /etc/hostname"</code> in the CMD in the Dockerfile.</p> <p>There appears to be a docker flag <code>-h</code> to set the hostname. Is there a way for me to specify in my replication controller that it should start the container with a special flag?</p> <p>The container's Debian, if it helps.</p>
<p>My previous answer was incorrect, edited with correct info</p> <p>The <code>-h</code> flag for <code>docker run</code> will set the hostname of the container when you create it.</p> <p>Test it out: <code>docker run -h test.example.com -it ubuntu /bin/bash</code></p> <p>The <code>docker start</code> command does not have the same <code>-h</code> or <code>--hostname</code> argument though. It doesn't seem possible to change the hostname of an existing container, just a new one from an image.</p> <p>However w/r/t Kubernetes: There is an open <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/4825" rel="noreferrer">issue on Github</a> regarding how Kubernetes handles hostnames. It does not seem like Kubernetes exposes docker's hostname setting directly, but you might be able to influence it via your pod name</p>
<p>We want to test kubernetes load balancing. So we create a 2 node cluster thats runs 6 replicas of our container. Container has running apache2 server and php and it will print pod name if we browse hostname.php</p> <p><strong>Cluster details:</strong> 172.16.2.92 -- master and minion 172.16.2.91 -- minion</p> <p><strong>RC and service details:</strong></p> <p>frontend-controller.json:</p> <pre><code>{ "kind":"ReplicationController", "apiVersion":"v1beta3", "metadata":{ "name":"frontend", "labels":{ "name":"frontend" } }, "spec":{ "replicas":6, "selector":{ "name":"frontend" }, "template":{ "metadata":{ "labels":{ "name":"frontend" } }, "spec":{ "containers":[ { "name":"php-hostname", "image":"naresht/hostname", "ports":[ { "containerPort":80, "protocol":"TCP" } ] } ] } } } } </code></pre> <p>frontend-service.json:</p> <pre><code>{ "kind":"Service", "apiVersion":"v1beta3", "metadata":{ "name":"frontend", "labels":{ "name":"frontend" } }, "spec":{ "createExternalLoadBalancer": true, "ports": [ { "port":3000, "targetPort":80, "protocol":"TCP" } ], "publicIPs": [ "172.16.2.92"], "selector":{ "name":"frontend" } } } </code></pre> <p><strong>Pod details:</strong> frontend-01bb8, frontend-svxfl and frontend-yki5s are running on node 172.16.2.91 frontend-65ykz , frontend-c1x0d and frontend-y925t are running on node 172.16.2.92</p> <p>If we browse for 172.16.2.92:3000/hostname.php, it prints POD name.</p> <p><strong>Problem:</strong></p> <p>Running watch -n1 curl 172.16.2.92:3000/hostname.php on node 172.16.2.92 gives only that pods(frontend-65ykz , frontend-c1x0d and frontend-y925t ). They are not showing other node 172.16.2.91 pods. Running same command on node 172.16.2.91 gives only that pods. They are not showing other node 172.16.2.92 pods. Running same command outside of cluster showing only 172.16.2.92 pods. 
But we want to see all pods not specific node pods, if we run wherever.</p> <p>Check below details for more information and help you if anything wrong</p> <p># kubectl get nodes</p> <pre><code>NAME LABELS STATUS 172.16.2.91 kubernetes.io/hostname=172.16.2.91 Ready 172.16.2.92 kubernetes.io/hostname=172.16.2.92 Ready </code></pre> <p># kubectl get pods</p> <pre><code>POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE frontend-01bb8 172.17.0.84 172.16.2.91/172.16.2.91 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute frontend-65ykz 10.1.64.79 172.16.2.92/172.16.2.92 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute frontend-c1x0d 10.1.64.77 172.16.2.92/172.16.2.92 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute frontend-svxfl 172.17.0.82 172.16.2.91/172.16.2.91 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute frontend-y925t 10.1.64.78 172.16.2.92/172.16.2.92 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute frontend-yki5s 172.17.0.83 172.16.2.91/172.16.2.91 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute kube-dns-sbgma 10.1.64.11 172.16.2.92/172.16.2.92 k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns Running 45 hours kube2sky gcr.io/google_containers/kube2sky:1.1 Running 45 hours etcd quay.io/coreos/etcd:v2.0.3 Running 45 hours skydns gcr.io/google_containers/skydns:2015-03-11-001 Running 45 hours </code></pre> <p># kubectl get services</p> <pre><code>NAME LABELS SELECTOR IP(S) PORT(S) frontend name=frontend name=frontend 192.168.3.184 3000/TCP kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns k8s-app=kube-dns 192.168.3.10 53/UDP kubernetes component=apiserver,provider=kubernetes &lt;none&gt; 192.168.3.2 443/TCP kubernetes-ro component=apiserver,provider=kubernetes &lt;none&gt; 192.168.3.1 80/TCP </code></pre> <p># iptables -t nat -L</p> <pre><code>Chain KUBE-PORTALS-CONTAINER (1 references) target prot opt source destination REDIRECT tcp -- anywhere 192.168.3.184 /* default/frontend: */ tcp dpt:3000 redir ports 50734 REDIRECT tcp -- anywhere kube02 /* default/frontend: */ tcp dpt:3000 redir ports 50734 REDIRECT udp -- anywhere 192.168.3.10 /* default/kube-dns: */ udp dpt:domain redir ports 52415 REDIRECT tcp -- anywhere 192.168.3.2 /* default/kubernetes: */ tcp dpt:https redir ports 33373 REDIRECT tcp -- anywhere 192.168.3.1 /* default/kubernetes-ro: */ tcp dpt:http redir ports 60311 Chain KUBE-PORTALS-HOST (1 references) target prot opt source destination DNAT tcp -- anywhere 192.168.3.184 /* default/frontend: */ tcp dpt:3000 to:172.16.2.92:50734 DNAT tcp -- anywhere kube02 /* default/frontend: */ tcp dpt:3000 to:172.16.2.92:50734 DNAT udp -- anywhere 192.168.3.10 /* default/kube-dns: */ udp dpt:domain to:172.16.2.92:52415 DNAT tcp -- anywhere 192.168.3.2 /* default/kubernetes: */ tcp dpt:https to:172.16.2.92:33373 DNAT tcp -- anywhere 192.168.3.1 /* default/kubernetes-ro: */ tcp dpt:http to:172.16.2.92:60311 </code></pre> <p>Thanks</p>
<p>Because flannel is not working properly so do </p> <blockquote> <p>/root/kube/reconfDocker.sh on everynode</p> </blockquote> <p>it will restart the docker and flannel then check the ifconfig docker0 and flannel0 bridge IPs should be in same network. Then load balancing will work. It works for me.</p>
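<p>A quick way to check the two bridges after the restart (illustrative; interface names may differ on your setup):</p> <pre><code>ifconfig docker0 | grep 'inet '
ifconfig flannel0 | grep 'inet '
# The docker0 subnet should fall inside the flannel0 network;
# if it doesn't, docker picked its bridge address before flannel was up.
</code></pre>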
<p>I'm trying to understand a good way to handle a Kubernetes cluster where there are several nodes and a master. I host the cluster within my company's cloud, on plain Ubuntu boxes (so no Google Cloud or AWS).</p> <p>Each pod contains the webapp (which is stateless) and I run any number of pods via replication controllers.</p> <p>I see that with Services, I can declare PublicIPs; however this is confusing because after adding the IP addresses of my minion nodes, each IP only exposes the pod that it runs and doesn't do any sort of load balancing. Because of this, if a node doesn't have any active pod running (as created pods are randomly allocated among nodes), it simply times out and I end up with some IP addresses that don't respond. Am I understanding this wrong?</p> <p>How can I do proper external load balancing for my web app? Should I do load balancing at the Pod level instead of using a Service? If so, pods are considered mortal and they may dynamically die and be born; how do I keep track of this?</p>
<p>The PublicIP thing is changing lately and I don't know exactly where it landed. But, services are <em>the</em> ip address and port that you reference in your applications. In other words, if I create a database, I create it as a pod (with or without a replication controller). I don't connect to the pod, however, from another application. I connect to a service which knows about the pod (via a label selector). This is important for a number of reasons.</p> <ol> <li>If the database fails and is recreated on a different host, the application accessing it still references the (stationary) service ip address, and the kubernetes proxies take care of getting the request to the correct pod.</li> <li>The service address is known by all Kubernetes nodes. Any node can proxy the request appropriately.</li> </ol> <p>I think a variation of the theme applies to your problem. You might consider creating an external load balancer which forwards traffic to all of your nodes for the specific (web) service. You still need to take the node out of the balancer's targets if the node goes down, but, I think that any node will forward the traffic for any service whether or not that service is on that node.</p> <p>All that said, I haven't had direct experience with external (public) ip addresses load balancing to the cluster, so there are probably better techniques. The main point I was trying to make is the node will proxy the request to the appropriate pod whether or not that node has a pod.</p> <p>-g</p>
<p>A few Kubernetes novice questions.</p> <p>If I got it right, when a Kubernetes cluster is set up, its size is defined by the number of minions in the cluster; let's say I create a cluster with two minions and then decide to deploy 4 pods with PHP and nginx serving it.</p> <p>Is there a way I can choose the amount of resources I want each pod to have?</p> <p>In old deployments, where we deployed directly to servers/VMs, we knew the amount of resources of each server/VM. Suppose I have a non-functional requirement of 2 GB RAM, 4 CPUs and 160 GB HDD.</p> <p>How can I do that using Kubernetes?</p> <p>Now suppose I have those 4 pods deployed and I want to scale up, and the new pods need to fulfill the same non-functional requirements.</p> <p>Do I need to resize my cluster, or is there a way Kubernetes does it for me?</p> <p>Thanks.</p>
<p>See the <a href="http://kubernetes.io/v1.0/docs/user-guide/compute-resources.html" rel="noreferrer">Compute Resources</a> section of the kubernetes user guide. It describes how to assign cpu and memory limits on your containers and how the scheduler places them in your cluster. </p> <p>As you scale up the number of pods you are running, the scheduler will attempt to place them in the available space. If there is no way that the pods can be scheduled, then the pods will stay in a pending state until the scheduler can find a place to run them. You may be able to relax some constraints you placed on your pods (host ports, label selectors, etc) or you may need to increase the compute capacity of your cluster by adding additional nodes.</p> <p>Right now, the cluster will not automatically add new nodes when it is out of capacity. Work to add this functionality, at least for GCE, is now underway (see #<a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/11748" rel="noreferrer">11748</a>) but does not exist in v1.0 of Kubernetes. Until that feature is implemented, you will need to manually scale your cluster. If you are running on GCE / GKE, this can be accomplished by resizing the managed instance group that contains the nodes for your cluster. On other cloud providers, you need to clone the node configuration onto a new node so that is has the proper credentials to join the cluster. </p>
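<p>A minimal sketch of how limits are expressed on a container (the pod name, image and values are placeholders; see the linked guide for the exact semantics):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: php-nginx
    image: my-registry/php-nginx   # illustrative image name
    resources:
      limits:
        cpu: 500m       # half a core
        memory: 512Mi
</code></pre>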
<p>I have followed the installation steps: <a href="https://cloud.google.com/container-engine/docs/tutorials/guestbook#install_gcloud_and_kubectl" rel="noreferrer">https://cloud.google.com/container-engine/docs/tutorials/guestbook#install_gcloud_and_kubectl</a></p> <p>A Google Container Engine cluster is up and running and gcloud CLI is authenticated and works.</p> <p>But kubectl says: <code>"couldn't read version from server: Get http://local host:8080/api: dial tcp 127.0.0.1:8080: connection refused"</code></p> <p>I think I need to use <code>kubectl config set-cluster</code> to setup the connection to my cluster on GCE.</p> <p>Where do I find the address of the Kubernetes master of my GCE cluster? With <code>gcloud beta container clusters list</code> I seemingly get the master IP of my cluster. I used that with <code>kubectl config set-cluster</code>.</p> <p>Now it says: <code>"error: couldn't read version from server: Get http:// 104.197.49.119/api: dial tcp 104.197.49.119:80: i/o timeout"</code></p> <p>Am I on the right track with this?</p> <p>Additional strangeness:</p> <ul> <li><p><code>gcloud container</code> or <code>gcloud preview container</code> doesn't work for me. Only <code>gcloud beta container</code> </p></li> <li><p>MASTER_VERSION of my cluster is 0.21.4, while the version of my kubectl client is GitVersion:"v0.20.2", even though freshly installed with gcloud.</p></li> </ul>
<p>Run</p> <p><code>gcloud container clusters get-credentials my-cluster-name</code></p> <p>to update the kubeconfig file and point kubectl at a cluster on Google Container Engine.</p>
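<p>If the cluster is not in your default zone you may need to pass it explicitly, and you can then verify that kubectl is talking to the right place (cluster name and zone below are placeholders):</p> <pre><code>gcloud container clusters get-credentials my-cluster-name --zone us-central1-b
kubectl get nodes   # should now list your Container Engine nodes
</code></pre>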
<p>Assume the following stack:</p> <ul> <li>A dedicated server</li> <li>The server is running Vagrant</li> <li>Vagrant is running 2 virtual machines master + minion-1 (Kubernetes)</li> <li>minion-1 is running a pod</li> <li>Within the pod is 2 containers: webservice and fileservice</li> </ul> <p>Both webservice and fileservice should be accessible from internet i.e. from outside. Either by web.mydomain.com - file.mydomain.com or www.mydomain.com/web/ - www.mydomain.com/file/</p> <p>Before using Kubernetes, I was using a remote proxy (HAproxy) and simply mapped domain names to an internal ip / port.</p> <p>Now with Kubernetes, I can imagine there is something dedicated to this task but I honestly have no clue from where to start.</p> <p>I read about "createExternalLoadBalancer", kubernetes Services and kube-proxy. Should a reverse-proxy still be put somewhere (before vagrant or within a pod ?) also is using Vagrant a good option for production (staying in the scope of this question) ?</p>
<p>The easiest thing for you to do at the moment is to make a service of type <code>NodePort</code>, and to configure your HAproxy to point at <code>minion-1:&lt;nodePort&gt;</code> (the port the service gets allocated on every node).</p> <p>createExternalLoadBalancer is the old, less flexible way to do this--it requires the cloud provider to do work. Type=NodePort doesn't require anything special from the cloud provider.</p>
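<p>A rough sketch of such a service (the names and port numbers are illustrative):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: webservice
spec:
  type: NodePort
  selector:
    app: webservice    # matches the labels on your web/file pods
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080    # optional; omit to let Kubernetes pick one
</code></pre> <p>HAproxy would then forward <code>web.mydomain.com</code> to <code>minion-1:30080</code> (or whatever node port was allocated).</p>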
<p>When I run the Kubernetes vagrant setup script:</p> <pre><code>export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash </code></pre> <p>I get:</p> <pre><code>Validating master Validating minion-1 Waiting for each minion to be registered with cloud provider error: couldn't read version from server: Get https://10.245.1.2/api:dial tcp 10.245.1.2:443: connection refused </code></pre> <p>Anyone know how I can fix this?</p>
<p>It looks like issue is gone, I've tried one more time and installation went flawlessly: </p> <p><img src="https://i.stack.imgur.com/TLPwG.png" alt="enter image description here"></p>
<p>I'm trying to spin up a Kubernetes (k8s) cluster on GCE. When I run</p> <pre><code>gcloud components update kubectl </code></pre> <p>on a Windows machine I get</p> <blockquote> <p>ERROR: (gcloud.components.update) The following components are unknown [kubectl]</p> </blockquote>
<p>Update (May 2016): As of <a href="https://cloud.google.com/container-engine/release-notes#march_29_2016" rel="nofollow">late March 2016</a>, gcloud will now install kubectl for windows. </p> <hr> <p>gcloud won't currently install kubectl on windows (hence the error). You can download a recent kubectl binary directly from GCS instead (<code>gsutil cp gs://kubernetes-release/release/v1.0.0/bin/windows/amd64/kubectl.exe kubectl.exe</code>). </p> <p>Since kubectl is much less heavily tested on windows than on Mac/Linux, you may need to pass the <code>--kubeconfig</code> argument if it isn't finding your configuration file in the default location (<code>gcloud container clusters get-credentials NAME</code> should write to <code>.kube/config</code> in your home directory and kubectl should read from the same file). </p>
<p>I am trying to use CircleCI to automate my push to the google container registry and initiate a rolling update of the kubernetes pods. I have a <code>circle.yml</code> file with dependencies like this:</p> <pre><code>dependencies: cache_directories: - ~/kubernetes - ./google-cloud-sdk pre: - chmod a+x ./download-gcloud.sh ./install-kubernetes.sh ./deploy.sh - ./download-gcloud.sh - ./google-cloud-sdk/install.sh &lt;--- This step hangs - ./google-cloud-sdk/bin/gcloud components update - ./google-cloud-sdk/bin/gcloud auth activate-service-account $GCLOUD_CLIENT_ID --key-file ./kubernetes/gcloud_pem.p12 - ./install-kubernetes.sh </code></pre> <p>I can pull the tar file down and install it, but I am not sure how to select a prompt, so the build hangs:</p> <pre><code>Welcome to the Google Cloud SDK! To help improve the quality of this product, we collect anonymized data on how the SDK is used. You may choose to opt out of this collection now (by choosing 'N' at the below prompt), or at any time in the future by running the following command: gcloud config set --scope=user disable_usage_reporting true Do you want to help improve the Google Cloud SDK (Y/n)? </code></pre> <p>Is there a flag I can set when running the install script to disable prompts?</p>
<p>The recommended way to install the Cloud SDK non-interactively is to set the environment variable <code>CLOUDSDK_CORE_DISABLE_PROMPTS</code>:</p> <pre><code>export CLOUDSDK_CORE_DISABLE_PROMPTS=1 </code></pre> <p>This will make the Cloud SDK run in non-interactive mode, in which it accepts the default value for all prompts. This applies both to installation and general operation (which will be useful for scripting).</p> <p>To make this permanent, run</p> <pre><code>gcloud config set core/disable_prompts 1 </code></pre> <p>once the Cloud SDK is installed.</p>
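<p>In a CircleCI config that might look roughly like this (only the relevant lines shown; the script paths mirror the question and are otherwise assumptions):</p> <pre><code>dependencies:
  pre:
    - chmod a+x ./download-gcloud.sh ./install-kubernetes.sh ./deploy.sh
    - ./download-gcloud.sh
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1 ./google-cloud-sdk/install.sh
    - ./google-cloud-sdk/bin/gcloud config set core/disable_prompts 1
    - ./google-cloud-sdk/bin/gcloud components update
</code></pre>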
<p>I'm trying to update a service using:</p> <pre><code>kubectl update service my-service \
  --patch='{ "apiVersion":"v1", "spec": { "selector": { "build":"2"} } }'
</code></pre> <p>I receive the following error: <code>Error from server: service "\"apiVersion\":\"v1\"," not found</code></p> <p>I have tried the following:</p> <ul> <li>moving the service name to the end</li> <li>removing the apiVersion</li> </ul> <hr> <p>Maybe kubectl update is not available for services?</p> <hr> <p>For now I have been making my updates by simply stopping and restarting my service, but sometimes the corresponding forwarding port changes, so that doesn't seem to be a good option...</p> <hr> <p>PS:</p> <ul> <li>v0.19</li> <li>api_v1</li> </ul>
<p>I am not sure if patch is 100% working yet, but if you are going to do this, you at least need to put apiVersion inside metadata, like so:</p> <pre><code>--patch='{ "metadata": { "apiVersion": "v1" }, "spec": { "selector": { "build": "2" } } }'
</code></pre>
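<p>Put together, the full command would look something like this (an untested sketch, given the uncertainty above about how well patch works in this version):</p> <pre><code>kubectl update service my-service \
  --patch='{ "metadata": { "apiVersion": "v1" }, "spec": { "selector": { "build": "2" } } }'
</code></pre>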
<p>I can get Pods information using <a href="http://localhost:8001/api/v1/pods" rel="nofollow">http://localhost:8001/api/v1/pods</a> from inside my cluster.</p> <p>Is there any way to get pod informations using <a href="http://master-public-ip:8001/api/v1/pods" rel="nofollow">http://master-public-ip:8001/api/v1/pods</a> ?</p>
<p>By default, the master only exposes HTTPS to the public internet, not HTTP. You should be able to hit <code>https://admin:password@master-public-ip/api/v1/pods/</code>, where <code>password</code> is the generated password for the admin user. This can be found either in the <code>.kube/config</code> file on your machine, or in the <code>/srv/kubernetes/known_tokens.csv</code> file on the master.</p> <p>E.g. on the master VM:</p> <pre><code>$ cat /srv/kubernetes/known_tokens.csv mYpASSWORD,admin,admin unused,kubelet,kubelet ... </code></pre> <p>Or on your machine:</p> <pre><code>$ cat ~/.kube/config ... - name: my-cluster user: client-certificate-data: ... client-key-data: ... password: mYpASSWORD username: admin ... $ curl --insecure https://admin:mYpASSWORD@master-public-ip/api/v1/pods/ ... </code></pre> <p>To avoid using <code>--insecure</code> (i.e. actually verify the server certificate that your master is presenting), you can use the <code>--cacert</code> flag to specify the cluster certificate authority from your <code>.kube/config</code> file.</p> <pre><code>$ cat ~/.kube/config ... - cluster: certificate-authority-data: bIgLoNgBaSe64eNcOdEdStRiNg server: https://master-public-ip name: my-cluster ... $ echo bIgLoNgBaSe64eNcOdEdStRiNg | base64 -d &gt; ca.crt $ curl --cacert=ca.crt https://admin:mYpASSWORD@master-public-ip/api/v1/pods/ ... </code></pre>
<p>I deployed apiserver on master node (core01) with following conf:</p> <pre><code>core01&gt; /opt/bin/kube-apiserver \ --insecure_bind_address=127.0.0.1 \ --insecure_port=8080 \ --kubelet_port=10250 \ --etcd_servers=http://core01:2379,http://core02:2379,http://core03:2379 \ --service-cluster-ip-range=10.1.0.0/16 \ --allow_privileged=false \ --logtostderr=true \ --v=5 \ --tls-cert-file="/var/run/kubernetes/apiserver_36kr.pem" \ --tls-private-key-file="/var/run/kubernetes/apiserver_36kr.key" \ --client-ca-file="/var/run/kubernetes/cacert.pem" \ --kubelet-certificate-authority="/var/run/kubernetes/cacert.pem" \ --kubelet-client-certificate="/var/run/kubernetes/kubelet_36kr.pem" \ --kubelet-client-key="/var/run/kubernetes/kubelet_36kr.key" </code></pre> <p>On minion node (core02), I can call api from HTTPS:</p> <pre><code>core02&gt; curl https://core01:6443/api/v1/nodes --cert /var/run/kubernetes/kubelet_36kr.pem --key /var/run/kubernetes/kubelet_36kr.key &gt; GET /api/v1/nodes HTTP/1.1 &gt; Host: core01:6443 &gt; User-Agent: curl/7.42.1 &gt; Accept: */* &gt; &lt; HTTP/1.1 200 OK &lt; Content-Type: application/json &lt; Date: Sat, 27 Jun 2015 15:33:50 GMT &lt; Content-Length: 1577 &lt; { "kind": "NodeList", "apiVersion": "v1", "metadata": { "selfLink": "/api/v1/nodes", "resourceVersion": "510078" }, .... </code></pre> <p>However, I can not start kubelet on this minion. It always complain no credentials. </p> <p>How can I make it work? Is there any doc on master &lt;-> minion communication authentication? Could you please give me the best practice?</p> <hr> <p>FYI, The command is following:</p> <pre><code>core02&gt; /opt/bin/kubelet \ --logtostderr=true \ --v=0 \ --api_servers=https://core01:6443 \ --address=127.0.0.1 \ --port=10250 \ --allow-privileged=false \ --tls-cert-file="/var/run/kubernetes/kubelet_36kr.pem" \ --tls-private-key-file="/var/run/kubernetes/kubelet_36kr.key" </code></pre> <p>kubelet log is following:</p> <pre><code>W0627 23:34:03.646311 3004 server.go:460] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead. W0627 23:34:03.646520 3004 server.go:422] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults. 
I0627 23:34:03.646710 3004 manager.go:127] cAdvisor running in container: "/system.slice/sshd.service" I0627 23:34:03.647292 3004 fs.go:93] Filesystem partitions: map[/dev/sda9:{mountpoint:/ major:0 minor:30} /dev/sda4:{mountpoint:/usr major:8 minor:4} /dev/sda6:{mountpoint:/usr/share/oem major:8 minor:6}] I0627 23:34:03.648234 3004 manager.go:156] Machine: {NumCores:1 CpuFrequency:2399996 MemoryCapacity:1046294528 MachineID:29f94a4fad8b31668bd219ca511bdeb0 SystemUUID:4F4AF929-8BAD-6631-8BD2-19CA511BDEB0 BootID:fa1bea28-675e-4989-ad86-00797721a794 Filesystems:[{Device:/dev/sda9 Capacity:18987593728} {Device:/dev/sda4 Capacity:1031946240} {Device:/dev/sda6 Capacity:113229824}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:21474836480 Scheduler:cfq} 8:16:{Name:sdb Major:8 Minor:16 Size:1073741824 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:52:54:71:f6:fc:b8 Speed:0 Mtu:1500} {Name:flannel0 MacAddress: Speed:10 Mtu:1472}] Topology:[{Id:0 Memory:1046294528 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}]} I0627 23:34:03.649934 3004 manager.go:163] Version: {KernelVersion:4.0.5 ContainerOsVersion:CoreOS 695.2.0 DockerVersion:1.6.2 CadvisorVersion:0.15.1} I0627 23:34:03.651758 3004 plugins.go:69] No cloud provider specified. I0627 23:34:03.651855 3004 docker.go:289] Connecting to docker on unix:///var/run/docker.sock I0627 23:34:03.652877 3004 server.go:659] Watching apiserver E0627 23:34:03.748954 3004 reflector.go:136] Failed to list *api.Pod: the server has asked for the client to provide credentials (get pods) E0627 23:34:03.750157 3004 reflector.go:136] Failed to list *api.Node: the server has asked for the client to provide credentials (get nodes) E0627 23:34:03.751666 3004 reflector.go:136] Failed to list *api.Service: the server has asked for the client to provide credentials (get services) I0627 23:34:03.758158 3004 plugins.go:56] Registering credential provider: .dockercfg I0627 23:34:03.856215 3004 server.go:621] Started kubelet E0627 23:34:03.858346 3004 kubelet.go:662] Image garbage collection failed: unable to find data for container / I0627 23:34:03.869739 3004 kubelet.go:682] Running in container "/kubelet" I0627 23:34:03.869755 3004 server.go:63] Starting to listen on 127.0.0.1:10250 E0627 23:34:03.899877 3004 event.go:185] Server rejected event '&amp;api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba23275ceda25", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"starting", Message:"Starting kubelet.", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016043, nsec:856189989, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016043, nsec:856189989, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!) 
I0627 23:34:04.021297 3004 factory.go:226] System is using systemd I0627 23:34:04.021790 3004 factory.go:234] Registering Docker factory I0627 23:34:04.022241 3004 factory.go:89] Registering Raw factory I0627 23:34:04.144065 3004 manager.go:946] Started watching for new ooms in manager I0627 23:34:04.144655 3004 oomparser.go:183] oomparser using systemd I0627 23:34:04.145379 3004 manager.go:243] Starting recovery of all containers I0627 23:34:04.293020 3004 manager.go:248] Recovery completed I0627 23:34:04.343829 3004 status_manager.go:56] Starting to sync pod status with apiserver I0627 23:34:04.343928 3004 kubelet.go:1683] Starting kubelet main sync loop. E0627 23:34:04.457765 3004 event.go:185] Server rejected event '&amp;api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba232995c8213", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node core02 status is now: NodeReady", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:452676115, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:452676115, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!) E0627 23:34:04.659874 3004 event.go:185] Server rejected event '&amp;api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba232a599cf8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node core02 status is now: NodeReady", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:658020236, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:658020236, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!) </code></pre>
<p>The first two lines of the kubelet log file actually point to the underlying problem -- you aren't specifying any client credentials for the kubelet to connect to the master.</p> <p>The <code>--tls-cert-file</code> and <code>--tls-private-key-file</code> arguments for the kubelet are used to configure the http server on the kubelet (if not specified, the kubelet <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cmd/kubelet/app/server.go#L198">will generate a self-signed certificate for its https endpoint</a>). This certificate / key pair is not used as the client certificate presented to the master for authentication.</p> <p>To specify credentials, there are two options: a kubeconfig file and a kubernetes_auth file. The latter is deprecated, so I would recommend using a kubeconfig file.</p> <p>Inside the kubeconfig file you need to specify either a bearer token or a client certificate that the kubelet should present to the apiserver. You can also specify the CA certificate for the apiserver (if you want the connection to be secure) or tell the kubelet to skip checking the certificate presented by the apiserver. Since you have certificates for the apiserver, I'd recommend adding the CA certificate to the kubeconfig file.</p> <p>The kubeconfig file should look like:</p> <pre><code>apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate-data: &lt;base64-encoded-cert&gt;
    client-key-data: &lt;base64-encoded-key&gt;
clusters:
- name: local
  cluster:
    certificate-authority-data: &lt;base64-encoded-ca-cert&gt;
contexts:
- context:
    cluster: local
    user: kubelet
  name: service-account-context
current-context: service-account-context
</code></pre> <p>To generate the base64 encoded client cert, you should be able to run something like <code>cat /var/run/kubernetes/kubelet_36kr.pem | base64</code>. If you don't have the CA certificate handy, you can replace the <code>certificate-authority-data: &lt;base64-encoded-ca-cert&gt;</code> line with <code>insecure-skip-tls-verify: true</code>.</p> <p>If you put this file at <code>/var/lib/kubelet/kubeconfig</code> it should get picked up automatically. Otherwise, you can use the <code>--kubeconfig</code> argument to specify a custom location.</p>
<p>As per <a href="https://stackoverflow.com/questions/31573402/error-couldnt-read-version-from-server">this question</a>, I was wondering if there are any plans to be able to use the kubectl tool without installing gcloud?</p> <p>The use case I have in mind is the same as the one that fellow had: <a href="https://github.com/pires/kubernetes-vagrant-coreos-cluster" rel="nofollow noreferrer">running kubernetes on CoreOS locally via vagrant</a>.</p>
<p>You can definitely use kubectl without gcloud. You can download the latest official Kubernetes release from its <a href="https://github.com/GoogleCloudPlatform/kubernetes/releases" rel="nofollow">releases page on Github</a>, then run kubectl by untarring it and either running <code>./cluster/kubectl.sh</code> or copying the kubectl binary from <code>platforms/$OS/$ARCHITECTURE/kubectl</code> into your shell's path.</p> <p>Also, setting up Kubernetes on vagrant can be as simple as running these two commands, which should get everything in place for you:</p> <p><code>export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash</code></p>
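<p>A sketch of the manual install (the release version and download URL are illustrative; pick whatever the releases page currently lists):</p> <pre><code>curl -L -O https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.1/kubernetes.tar.gz
tar xzf kubernetes.tar.gz
cd kubernetes
./cluster/kubectl.sh version                            # option 1: use the wrapper script
sudo cp platforms/linux/amd64/kubectl /usr/local/bin/   # option 2: put the binary on your PATH
</code></pre>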
<p>Batch computations, Monte Carlo, using Docker image, multiple jobs running on Google cloud and managed by Kubernetes. But it (replication controller, I guess?) managed to restart same computation again and again due to default restart policy.</p> <p>Is there a way now to let pods die? Or maybe other workarounds to do pods garbage collection?</p>
<p>Now that v1.0 is out, better native support for getting the batch computations is one of the team's top priorities, but it is already quite possible to run them.</p> <p>If you run something as a pod rather than as a replication controller, you can set the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/release-1.0/pkg/api/v1/types.go#L864" rel="nofollow"><code>restartPolicy</code></a> field on it. The <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/release-1.0/pkg/api/v1/types.go#L841" rel="nofollow"><code>OnFailure</code></a> policy is probably what you'd want, meaning that kubernetes will restart a pod that exited with a non-zero exit code, but won't restart a pod that exited zero.</p> <p>If you're using <code>kubectl run</code> to start your pods, though, I'm unfortunately not aware of a way to have it create just a pod rather than a replication controller. If you'd like something like that, it'd be great if you <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/new" rel="nofollow">opened an issue</a> requesting it as an option.</p>
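<p>A minimal pod sketch with that policy (the pod and image names are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: montecarlo-job-1
spec:
  restartPolicy: OnFailure   # restart only on non-zero exit codes
  containers:
  - name: worker
    image: my-registry/montecarlo-worker
</code></pre>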
<p>I'm getting started with Kubernates and having problems installing Postgres using Kubernetes with a GCE Persistent disk. I can successfully install Mysql using both the Kubernates walkthroughs and also the following guide: <a href="http://amygdala.github.io/kubernetes/2015/01/13/k8s1.html" rel="nofollow">http://amygdala.github.io/kubernetes/2015/01/13/k8s1.html</a></p> <p>However, when I try to achieve a similar thing with postgres, it seems to fail when attaching to the disk or using the disk. I've created a pod yaml based on the mysql one from the above post but substituting the postgres docker image:</p> <pre><code>apiVersion: v1beta1 id: postgres desiredState: manifest: version: v1beta1 id: postgres containers: - name: postgres image: postgres env: - name: DB_PASS value: password cpu: 100 ports: - containerPort: 5432 volumeMounts: # name must match the volume name below - name: persistent-storage # mount path within the container mountPath: /var/lib/postgresql/data volumes: - name: persistent-storage source: persistentDisk: # This GCE PD must already exist and be formatted ext4 pdName: postgres-disk fsType: ext4 labels: name: postgres kind: Pod </code></pre> <p>However when I create</p> <pre><code>$ kubectl create -f postgres.yaml </code></pre> <p>I get the following errors:</p> <pre><code>$ kubectl logs postgres $ postgres cannot access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory </code></pre> <p>I can see that the postgres-disk is attached to a minion server, so I'm wondering if it's related to the Volumes in the docker image that I'm using, or if I need separate mount paths for the postgresql.conf file.</p> <p>Now if I change the mount path (eg mountPath: /var/lib/postgresql) the pod will start ok but it doesn't appear to be using the persistent data. Inspecting the volumes in the docker container on the minion gives me:</p> <pre><code>"Volumes": { "/dev/termination-log": "/var/lib/kubelet/pods/52177db4-149c-11e5-a64b-42010af06968/containers/postgres/91ecf33c939588b4165865f46b646677bf964fab81ea7ec08b598568513d4644", "/var/lib/postgresql": "/var/lib/kubelet/pods/52177db4-149c-11e5-a64b-42010af06968/volumes/kubernetes.io~gce-pd/pgdata", "/var/lib/postgresql/data": "/var/lib/docker/vfs/dir/c3ecda11de6a598d99842c06bee22097f1cb63a6e467cbe7af874d003140a4af", "/var/run/secrets/kubernetes.io/serviceaccount": "/var/lib/kubelet/pods/52177db4-149c-11e5-a64b-42010af06968/volumes/kubernetes.io~secret/default-token-b6s28" }, </code></pre> <p>I've also tried using a json file with v1beta3 with similar results:</p> <pre><code>{ "kind": "Pod", "apiVersion": "v1beta3", "metadata": { "name": "postgres" }, "spec": { "volumes": [{ "name": "pgdata", "gcePersistentDisk": { "pdName": "postgres-disk", "fsType": "ext4" } }], "containers": [ { "name": "postgres", "image": "postgres", "ports": [ { "name": "postgres-ports", "hostPort": 5432, "containerPort": 5432 } ], "env": [ { "name": "DB_USER", "value": "postgres" }, { "name": "DB_PASS", "value": "password" } ], "volumeMounts": [ { "name": "pgdata", "readOnly": false, "mountPath": "/var/lib/postgresql/data" } ] } ] } } </code></pre> <p>It's quite possible I've just missed something in the doco, but any help would be appreciated with this!</p>
<p>I'm getting the same problem as well - it's due to the db initialization script wrongly assuming a non-empty volume directory means it has already been initialized.</p> <p>Unfortunately, a new GCE persistence disk contains a <code>lost+found</code> directory.</p> <p>I've submitted a fix for the docker image <a href="https://github.com/docker-library/postgres/pull/72" rel="noreferrer">here</a>.</p>
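<p>You can see why by looking inside the freshly attached volume: a newly formatted ext4 disk already contains a <code>lost+found</code> directory, so the data directory is never truly empty (illustrative, using the mount path from the question):</p> <pre><code>$ ls -A /var/lib/postgresql/data
lost+found
</code></pre>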
<p>This is my first attempt to install and use Kubernetes. I am trying to install an environment on Mac for developing my own apps and deploying them for test locally with Kubernetes. I am familiar with using Vagrant, VirtualBox and Docker for the same purpose. When I saw this page <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md" rel="noreferrer">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md</a> I assumed it would be trivial. I executed these lines:</p> <pre><code>export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash </code></pre> <p>This created a master VM and a Minion, but Kubernetes seems to have failed to start on the master. On the master /var/log/salt/master is full of python Traceback errors, like this:</p> <pre><code>2015-07-17 22:14:42,629 [cherrypy.error ][INFO ][3252] [17/Jul/2015:22:14:42] ENGINE Started monitor thread '_TimeoutMonitor'. 2015-07-17 22:14:42,736 [cherrypy.error ][ERROR ][3252] [17/Jul/2015:22:14:42] ENGINE Error in HTTP server: shutting down Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/cherrypy/process/servers.py", line 187, in _start_http_thread self.httpserver.start() File "/usr/lib/python2.7/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 1824, in start raise socket.error(msg) error: No socket could be created </code></pre> <p>Vagrant is version 1.7.3. VirtualBox is version 4.3.30</p> <p>Have I made an obvious stupid mistake?</p>
<p>If you just want to kick the tires with Kubernetes, I'd recommend installing <a href="http://boot2docker.io" rel="nofollow">boot2docker</a> and then following the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/release-1.0/docs/getting-started-guides/docker.md" rel="nofollow">Running kubernetes locally via Docker</a> getting started guide. Once you are comfortable interacting with the Kubernetes API and want a more complex local setup, you can then work on installing Vagrant. </p> <p>If the Vagrant instructions aren't working, you should also feel free to file a bug in the github repository. </p>
<p>I am trying to set up a HTTP load balancer for my Meteor app on google cloud. I have the application set up correctly, and I know this because I can visit the IP given in the Network Load Balancer.</p> <p>However, when I try and set up a HTTP load balancer, the health checks always say that the instances are unhealthy (even though I know they are not). I tried including a route in my application that returns a status 200, and pointing the health check towards that route.</p> <p>Here is exactly what I did, step by step:</p> <ol> <li>Create new instance template/group for the app. </li> <li>Upload image to google cloud.</li> <li>Create replication controller and service for the app.</li> <li>The network load balancer was created automatically. Additionally, there were two firewall rules allowing HTTP/HTTPS traffic on all IPs.</li> </ol> <p>Then I try and create the HTTP load balancer. I create a backend service in the load balancer with all the VMs corresponding to the meteor app. Then I create a new global forwarding rule. No matter what, the instances are labelled "unhealthy" and the IP from the global forwarding rule returns a "Server Error".</p>
<p>In order to use HTTP load balancing on Google Cloud with Kubernetes, you have to take a slightly different approach than for network load balancing, due to the current lack of built-in support for HTTP balancing.</p> <p>I suspect you created your service in step 3 with <code>type: LoadBalancer</code>. This won't work properly because of how the LoadBalancer type is implemented, which causes the service to be available only on the network forwarding rule's IP address, rather than on each host's IP address.</p> <p>What will work, however, is using <code>type: NodePort</code>, which will cause the service to be reachable on the automatically-chosen node port on each host's external IP address. This plays more nicely with the HTTP load balancer. You can then pass this node port to the HTTP load balancer that you create. Once you open up a firewall on the node port, you should be good to go!</p> <p>If you want more concrete steps, a <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="noreferrer">walkthrough of how to use HTTP load balancers</a> with Container Engine was actually recently added to GKE's documentation. The same steps should work with normal Kubernetes.</p> <p>As a final note, now that version 1.0 is out the door, the team is getting back to adding some missing features, including native support for L7 load balancing. We hope to make it much easier for you soon!</p>
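<p>Once the service is switched to <code>type: NodePort</code>, finding and opening the node port might look something like this (the service name, port number and network tag are placeholders):</p> <pre><code># Find the node port Kubernetes picked for the service
kubectl describe service my-meteor-app | grep -i nodeport

# Allow the HTTP load balancer's health checks and traffic to reach that port
gcloud compute firewall-rules create meteor-nodeport \
  --allow tcp:30123 \
  --target-tags my-cluster-node-tag
</code></pre>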
<p>I would like to create a kubernetes pod that contains 2 containers, both with different images, so I can start both containers together.</p> <p>Currently I have tried the following configuration:</p> <pre><code>{ "id": "podId", "desiredState": { "manifest": { "version": "v1beta1", "id": "podId", "containers": [{ "name": "type1", "image": "local/image" }, { "name": "type2", "image": "local/secondary" }] } }, "labels": { "name": "imageTest" } } </code></pre> <p>However when I execute <code>kubecfg -c app.json create /pods</code> I get the following error:</p> <pre><code>F0909 08:40:13.028433 01141 kubecfg.go:283] Got request error: request [&amp;http.Request{Method:"POST", URL:(*url.URL)(0xc20800ee00), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{}, B ody:ioutil.nopCloser{Reader:(*bytes.Buffer)(0xc20800ed20)}, ContentLength:396, TransferEncoding:[]string(nil), Close:false, Host:"127.0.0.1:8080", Form:url.Values(nil), PostForm:url.Values(nil), Multi partForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil)}] failed (500) 500 Internal Server Error: {"kind":"Status","creationTimestamp": null,"apiVersion":"v1beta1","status":"failure","message":"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"SSH podId\", CreationTimestamp:util.Time{Time:time.Time{sec:63545848813, nsec :0x14114e1, loc:(*time.Location)(0xb9a720)}}, SelfLink:\"\", ResourceVersion:0x0, APIVersion:\"\"}, Labels:map[string]string{\"name\":\"imageTest\"}, DesiredState:api.PodState{Manifest:api.ContainerMa nifest{Version:\"v1beta1\", ID:\"podId\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"type1\", Image:\"local/image\", Command:[]string(nil), WorkingDir:\"\", Ports:[]ap i.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}, api.Container{Name:\"type2\", Image:\"local/secondary\", Command:[]string(n il), WorkingDir:\"\", Ports:[]api.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}}}, Status:\"\", Host:\"\", HostIP:\"\ ", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"RestartAlways\"}}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil ), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"\"}}}","code":500} </code></pre> <p>How can I modify the configuration accordingly?</p> <p>Running kubernetes on a vagrant vm (yungsang/coreos).</p>
<p>Your pod spec file looks invalid. According to <a href="http://kubernetes.io/v1.0/docs/user-guide/walkthrough/README.html#multiple-containers" rel="nofollow">http://kubernetes.io/v1.0/docs/user-guide/walkthrough/README.html#multiple-containers</a>, a valid multiple-container pod spec should look like this:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}
</code></pre> <p>Latest doc at <a href="http://kubernetes.io/docs/user-guide/walkthrough/#multiple-containers" rel="nofollow">http://kubernetes.io/docs/user-guide/walkthrough/#multiple-containers</a></p>
<p>This is my first attempt to install and use Kubernetes. I am trying to install an environment on Mac for developing my own apps and deploying them for test locally with Kubernetes. I am familiar with using Vagrant, VirtualBox and Docker for the same purpose. When I saw this page <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md" rel="noreferrer">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md</a> I assumed it would be trivial. I executed these lines:</p> <pre><code>export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash </code></pre> <p>This created a master VM and a Minion, but Kubernetes seems to have failed to start on the master. On the master /var/log/salt/master is full of python Traceback errors, like this:</p> <pre><code>2015-07-17 22:14:42,629 [cherrypy.error ][INFO ][3252] [17/Jul/2015:22:14:42] ENGINE Started monitor thread '_TimeoutMonitor'. 2015-07-17 22:14:42,736 [cherrypy.error ][ERROR ][3252] [17/Jul/2015:22:14:42] ENGINE Error in HTTP server: shutting down Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/cherrypy/process/servers.py", line 187, in _start_http_thread self.httpserver.start() File "/usr/lib/python2.7/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 1824, in start raise socket.error(msg) error: No socket could be created </code></pre> <p>Vagrant is version 1.7.3. VirtualBox is version 4.3.30</p> <p>Have I made an obvious stupid mistake?</p>
<p>I don't yet know the fix but I know what is going wrong since it happens to me as well:</p> <ul> <li>OS X 10.10.3</li> <li>Vagrant 1.7.4</li> <li>VirtualBox 4.3.30</li> <li>Kubernetes 1.0.1</li> </ul> <p>When I run the default configuration of this (which creates one "master" and one "minion" VM) I see that the static IP address is not being assigned to the "eth1" interface, and I also see that the Salt API server is sitting in what appears to be an infinite retry loop because it is trying to listen on that IP address.</p> <p>Also, the following message happened during boot:</p> <pre><code>[vagrant@kubernetes-master ~]$ dmesg | grep eth1 [ 9.321496] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready </code></pre> <p>So basically, the static IP address didn't get assigned because eth1 wasn't ready when the system first booted, and Salt is waiting for it to get assigned.</p> <p>I could fix this after boot by sshing to the box using "vagrant ssh" and running the command:</p> <pre><code>sudo /etc/init.d/network restart </code></pre> <p>on each host.</p> <p>This "fixes" eth1 by assigning the static IP address, and after that Salt begins to do its thing, installs Docker, boots various containers, and so on.</p> <p>What I don't know is how to make this work every time without manual intervention. It appears to be some sort of a race condition between Vagrant and VirtualBox.</p>
<p>I'm very interested in the new Google Cloud Service: Google Container Engine, namely in being able to write systems that can scale using containers' properties.</p> <p>I saw the StackOverflow questions:</p> <ul> <li><a href="https://stackoverflow.com/questions/26899789/autoscaling-in-google-container-engine">Autoscaling in Google Container Engine</a></li> <li><a href="https://stackoverflow.com/questions/26899733/increasing-the-cluster-size-in-google-container-engine">Increasing the cluster size in Google Container Engine</a></li> </ul> <p>And I understood that auto-scaling (and other features) is planned; however, I didn't see any release dates.</p> <p>When will the referred auto-scale features/integrations be released/available?</p> <p>When will Google Container Engine reach Beta (leave Alpha)?</p> <p>Does Google Container Engine have a roadmap with release dates that can be consulted?</p>
<p>It's worth noting that you can now both autoscale and resize a running Google Container Engine cluster by using Instance Groups - <a href="https://cloud.google.com/compute/docs/instance-groups/" rel="nofollow">https://cloud.google.com/compute/docs/instance-groups/</a></p>
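<p>For illustration, resizing and autoscaling are driven through the managed instance group that backs the cluster's nodes. The group name and zone below are placeholders (GKE generates its own group name), so treat this as a sketch of the relevant gcloud commands rather than exact syntax for your cluster:</p> <pre><code># list the managed instance groups to find the one backing your cluster
gcloud compute instance-groups managed list

# manually resize the node instance group (name and zone are placeholders)
gcloud compute instance-groups managed resize gke-mycluster-group --size 5 --zone us-central1-a

# or turn on autoscaling for the same group
gcloud compute instance-groups managed set-autoscaling gke-mycluster-group \
    --min-num-replicas 2 --max-num-replicas 10 --target-cpu-utilization 0.75 --zone us-central1-a
</code></pre>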
<p>Batch computations, Monte Carlo, using Docker image, multiple jobs running on Google cloud and managed by Kubernetes. No Replication Controllers, just multiple pods with NoRestart policy delivering computed payloads to our server. So far so good. Problem is, I have cluster with <code>N</code> nodes/minions, and have <code>M</code> jobs to compute, where <code>M &gt; N</code>. So I would like to fire <code>M</code> pods at once and tell Kubernetes to schedule it in such a way so that only <code>N</code> are running at a given time, and everything else is kept in Pending state. As soon as one pod is done, next is scheduled to run moving from Pending to Running and so on and so forth till all <code>M</code> pods are done.</p> <p>Is it possible to do so? </p>
<p>Just for the record, after discussion with Alex, trial and error and a binary search for a good number, what worked for me was setting the CPU resource limit in the Pod JSON to:</p> <pre><code> "resources": { "limits": { "cpu": "490m" } } </code></pre> <p>I have no idea how and why this particular value influences the Kubernetes scheduler, but it keeps nodes churning through the jobs, with exactly one pod per node running at any given moment.</p>
<p>I am getting the following error when trying to run <code>kubectl</code> locally.</p> <p><code>error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused</code></p> <p>I know this relates to the kubectl config but I'm at a loss in how to resolve it. 2 days ago, I was experimenting with GKE and did set the config to point to GCE. I tried deleting this config file and then getting <a href="https://github.com/pires/kubernetes-vagrant-coreos-cluster">Vagrant with CoreOS locally</a>. This <code>vagrant up</code> throws a similar error complaining about not being able to connect. </p> <p>What is the appropriate way to instrument <code>kubectl</code> so it can connect to the API and return information?</p>
<p><strong>tl;dr</strong> <code>gcloud container get-credentials --cluster=CLUSTER_ID --zone=YOURZONE</code></p> <hr> <p>A little background: the kubectl tool is developed by Google but isn't actually integrated into Google Cloud directly; gcloud just helps you get a compatible version of it when you tell it to install the component.</p> <p>If you're getting the <code>Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused</code> error, it is likely because the <code>kubectl</code> tool is not configured at all, or is misconfigured. What I believe it's trying to do is assume you have Kubernetes set up locally only, which you don't in this case since it's all on Google Cloud (hence the cryptic error).</p> <p>You can verify whether your kubectl is misconfigured by running <code>kubectl config view</code>. If it's correctly configured you should see a few entries under clusters, with IP addresses, and under users you should see a user for each project, etc. If you see nothing of the sort (i.e. empty clusters and empty users) then it is misconfigured; you will also encounter cryptic issues if you don't see entries for the specific cluster you are trying to work on.</p> <p>Annoyingly, a lot of <code>gcloud</code> commands will silently auto-configure it for you, so if you follow something like a hello-wordpress tutorial it will look like you don't have to do this and that kubectl somehow communicates with gcloud, but nothing of the sort happens. It's also very easy to lose that configuration.</p> <p>To tell gcloud to give you the kubectl config, run the following:</p> <pre><code>gcloud container get-credentials --cluster=CLUSTER_ID --zone=YOURZONE </code></pre> <p>For the cluster ID, run <code>gcloud container clusters list</code>.</p> <p>Zone is "europe-west1-d" or whatever you've chosen.</p>
<p>I am trying to see if there is any example of creating a Kubernetes pod which starts 2-3 containers that are linked with each other, but I couldn't find any.</p> <p>Has anybody tried linking containers using a Kubernetes config?</p>
<p>The containers in the same pod share localhost, so you don't need to link containers; just use localhost:containerPort.</p>
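<p>As a minimal sketch of what that looks like in practice (the web image name is a placeholder for your own application image), the two containers below run in one pod, and the web container can reach redis at localhost:6379 without any linking:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-redis
spec:
  containers:
  - name: web
    image: local/my-web-app   # placeholder: your application image
    ports:
    - containerPort: 80
  - name: redis
    image: redis
    ports:
    - containerPort: 6379
</code></pre>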
<p>Batch computations, Monte Carlo, using Docker image, multiple jobs running on Google cloud and managed by Kubernetes. No Replication Controllers, just multiple pods with NoRestart policy delivering computed payloads to our server. So far so good. Problem is, I have cluster with <code>N</code> nodes/minions, and have <code>M</code> jobs to compute, where <code>M &gt; N</code>. So I would like to fire <code>M</code> pods at once and tell Kubernetes to schedule it in such a way so that only <code>N</code> are running at a given time, and everything else is kept in Pending state. As soon as one pod is done, next is scheduled to run moving from Pending to Running and so on and so forth till all <code>M</code> pods are done.</p> <p>Is it possible to do so? </p>
<p>Yes, you can have them all ask for a resource of which there's only one on each node, then the scheduler won't be able to schedule more than N at a time. The most common way to do this is to have each pod ask for a <a href="https://github.com/kubernetes/kubernetes/blob/9030f16071f4f6d46f0482995f0db5eaf96b9294/pkg/api/v1/types.go#L802" rel="nofollow"><code>hostPort</code></a> in the <code>ports</code> section of its <code>containers</code> spec.</p> <p>However, I can't say I'm completely sure why you would want to limit the system to one such pod per node. If there are enough resources available to run multiple at a time on each node, it should speed up your job to let them run.</p>
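<p>As a sketch of that approach (the image name is a placeholder), giving every job pod the same <code>hostPort</code> means the scheduler can place at most one of them on each node, since two pods cannot claim the same host port:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: montecarlo-job-1
spec:
  restartPolicy: Never
  containers:
  - name: worker
    image: local/montecarlo-worker   # placeholder: your job image
    ports:
    - containerPort: 8080
      hostPort: 8080                 # only one pod per node can bind this host port
</code></pre>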
<p>As part of scaling pods in kubernetes I want to ensure I gracefully serve my http connections before shutting down. To that extent I have implemented this code in go:</p> <pre><code>package main import ( "fmt" "io" "net/http" "os" "os/signal" "syscall" "github.com/braintree/manners" ) func main() { shutdown := make(chan int) //create a notification channel to shutdown sigChan := make(chan os.Signal, 1) //start the http server http.HandleFunc("/", hello) server := manners.NewWithServer(&amp;http.Server{Addr: ":80", Handler: nil}) go func() { server.ListenAndServe() shutdown &lt;- 1 }() //register for interupt (Ctrl+C) and SIGTERM (docker) signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM) go func() { &lt;-sigChan fmt.Println("Shutting down...") server.Close() }() &lt;-shutdown } func hello(w http.ResponseWriter, r *http.Request) { // time.Sleep(3000 * time.Millisecond) io.WriteString(w, "Hello world!") } </code></pre> <p>This looks out for the docker SIGTERM and gracefully shuts down after existing requests have been served. When I run this container in kubernetes with 10 instances I can scale up and down without incident, as long as I don't scale down to a single instance. When I scale to a single instance I see a short set of http errors, then all looks fine again.</p> <p>I find it strange as in scaling I would assume the proxy is updated first, then containers are shut down and the code above would allow requests to be served out.</p> <p>In my current setup I am running 2 nodes, maybe the issue is when scaling drops below the number of nodes and there is some sort of timing issue with etcd updates? Any insight into what is going on here would be really useful</p>
<p>You should use a readiness check (<a href="http://kubernetes.io/v1.0/docs/user-guide/production-pods.html#liveness-and-readiness-probes-aka-health-checks" rel="nofollow noreferrer">http://kubernetes.io/v1.0/docs/user-guide/production-pods.html#liveness-and-readiness-probes-aka-health-checks</a>)</p> <p>that transitions the Pod to "not ready" after you receive a SIGTERM</p> <p>Once that happens, the service will remove the Pod from serving, prior to the delete. </p> <p>(without a readiness check the Service simply doesn't know that the pod doesn't exist, until it is actually deleted)</p> <p>You may also want to use a PreStop hook that sets readiness to false, and then drains all existing requests. PreStop hooks are called synchronously prior to a Pod being deleted and they are described here:</p> <p><a href="https://kubernetes-v1-4.github.io/docs/user-guide/production-pods/#lifecycle-hooks-and-termination-notice" rel="nofollow noreferrer">https://kubernetes-v1-4.github.io/docs/user-guide/production-pods/#lifecycle-hooks-and-termination-notice</a></p>
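<p>A rough sketch of how those pieces fit together in the pod spec (the health endpoint and drain command are assumptions, not part of your original code): the readiness probe should start failing once the app has received SIGTERM, and the preStop hook gives in-flight requests time to finish before the container receives the TERM signal:</p> <pre><code>containers:
- name: web
  image: local/graceful-web        # placeholder: the Go server above
  ports:
  - containerPort: 80
  readinessProbe:
    httpGet:
      path: /healthz               # assumed endpoint that returns non-200 while shutting down
      port: 80
    initialDelaySeconds: 5
    timeoutSeconds: 1
  lifecycle:
    preStop:
      exec:
        command: ["/bin/sh", "-c", "sleep 10"]   # assumed drain delay before TERM is sent
</code></pre>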
<p>Can someone please explain the advantages/disadvantages of using the following when building container images, rather than using a Dockerfile?</p> <ol> <li><p>Packer - a tool for creating machine and container images for multiple platforms from a single source configuration</p></li> <li><p>Dockramp - a client-driven Docker container image builder</p></li> </ol>
<ol> <li><p>Packer is a tool that was initially created to build AWS AMIs, i.e. base VM images in AWS. It has been extended to work with containers, a lot of different virtualization software such as <a href="https://www.vmware.com" rel="nofollow">VMware</a> and <a href="http://wiki.qemu.org/KVM" rel="nofollow">KVM/QEMU</a>, and other cloud/IaaS providers like <a href="https://www.digitalocean.com/" rel="nofollow">DigitalOcean</a>. It was developed by <a href="https://hashicorp.com/" rel="nofollow">Hashicorp</a> but it's open source.</p></li> <li><p><a href="https://github.com/jlhawn/dockramp" rel="nofollow">Dockramp</a> is an alternative to using <code>docker build</code>. It uses the same <code>Dockerfile</code> that <code>docker build</code> would use, but with some additional enhancements. For example, it can accept <a href="https://en.wikipedia.org/wiki/Here_document" rel="nofollow">heredocs</a> in the <code>RUN</code> command for multi-line bash commands.</p></li> </ol> <p>Docker/LXC is fairly fast, but the main advantage of building images (and this applies to virtualization images too) is that you get a fully installed application or application stack from the get-go. This tends to work better in autoscaling environments because it takes less time for your application to start servicing traffic, whether in a container or a VM.</p> <p>If instead you build your Docker image every time from your Dockerfile, it needs to run a series of steps before becoming 'ready', hence it might take longer to start servicing traffic.</p> <p>Hope it helps.</p>
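<p>To make the Packer option concrete, here is a minimal sketch of a Packer template that builds a container image with the Docker builder and tags it locally. The repository name and packages are placeholders; treat this as an illustration of the workflow rather than a drop-in config:</p> <pre><code>{
  "builders": [{
    "type": "docker",
    "image": "ubuntu:14.04",
    "commit": true
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "apt-get update",
      "apt-get install -y nginx"
    ]
  }],
  "post-processors": [{
    "type": "docker-tag",
    "repository": "local/my-nginx",
    "tag": "latest"
  }]
}
</code></pre> <p>You would then run it with <code>packer build template.json</code>.</p>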
<p>I successfully deployed Kubernetes on AWS using "getting started on AWS ec2 guide" (<a href="http://kubernetes.io/v1.0/docs/getting-started-guides/aws.html" rel="nofollow">http://kubernetes.io/v1.0/docs/getting-started-guides/aws.html</a>), but the disk size of all the minions (kubernetes hosts) is 8gb. I would like to increase the disk size, but I haven't found a way to do it.</p> <p>I can change the VM size by setting MINION_SIZE (e.g. export MINION_SIZE=m3.medium) prior to installing, but the disk size is still 8gb.</p> <p>From the Kubernetes install instructions for other cloud providers there's an option to set MINION_DISK_SIZE to set the disk size. I tried that with AWS ec2 installation, and the variable is ignored.</p> <p>I also poked around the config files, but I didn't see anything obvious.</p> <p>Any suggestions on how to set the disk size for minions when installing Kubernetes on AWS ec2?</p>
<p>I recently stumbled upon the same issue. Have a look at BLOCK_DEVICE_MAPPINGS in <code>kubernetes/cluster/aws/util.sh</code>. You can modify it to have something more appropriate for an EBS-only minion.</p> <p>For example:</p> <pre><code>[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":80}}] </code></pre> <p>AWS docs: <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html" rel="nofollow">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html</a></p>
<p>Is there any known approach with which we can manage the Kubernetes cluster from a dashboard kind of UI? Kubernetes by default comes up with a UI which is good enough to view the details of running pods, services etc. </p> <p>But what is the approach if we need to modify some properties of Kubernetes cluster (<em>like increasing the replicas on RC, change auto-scaling policies etc.</em>) from UI rather than issuing kubectl commands?</p> <p>I had heard that <a href="http://thenewstack.io/kismatic-hopes-foster-ecosystem-contributing-webui-code-kubernetes-project/" rel="nofollow">Kismatic</a> is working towards achieving this same goal, but i am not quite sure how to configure from their <a href="https://github.com/kismatic" rel="nofollow">github</a> projects. </p> <p>I am using Google cloud for my projects. </p>
<p>The UI that runs by default in Kubernetes doesn't currently support modifying anything running in the cluster, and there aren't any other official UIs that do. It's something that we'd definitely like to improve in the future.</p> <p>However, the fabric8 folks have put together a console that does allow you to change what's running in a Kubernetes cluster in addition to viewing it. I haven't tried it myself so I can't vouch for it, but it may be worth checking out. There's a video demo <a href="https://vimeo.com/134408470">here</a>, with documentation <a href="http://fabric8.io/guide/console.html">here</a>.</p>
<p>I am running a Kubernetes cluster on Google container engine. My metrics are not getting pushed to Stackdriver by default.</p> <p>Do I need to start Heapster service explicitly or is it automatically managed by container engine itself?</p>
<p>If there isn't a Heapster pod running in your cluster, then your cluster was created before we started enabling cluster monitoring by default. </p> <p>We are working on adding a way for users to retroactively turn on monitoring, but if you want metrics pushed into stack driver today you will need to create a new cluster (launching Heapster yourself isn't sufficient for the metrics to get collected). </p>
<p>I have a Kubernetes computation cluster running on GCE and I am reasonably happy so far. I know that if I create a K8s cluster, I get to see the nodes as VM instances and the cluster as an instance group. I would like to do it the other way around - create the instances/group first and make a K8s cluster out of it, so it could be managed by Kubernetes. The reason I want to do so is to try to make the nodes preemptible, which might better fit my workload.</p> <p>So the question is: how to get a Kubernetes cluster with preemptible nodes. I can do either one or the other right now, but not both together.</p>
<p>There is a patch out for review at the moment (#<a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/12384" rel="nofollow">12384</a>) that makes a configuration option to mark the nodes in the instance group as preemptible. If you are willing to build from head, this should be available as a configuration option in the next couple of days. In the meantime, you can see from the patch how easy it is to modify the GCE startup scripts to make your VMs preemptible. </p>
<p>Kubernetes creates a load balancer for each service automatically in GCE. How can I manage something similar on AWS?</p> <p>Kubernetes services basically use kube-proxy to handle the internal traffic. But the kube-proxy IP does not have access to the external network.</p> <p>Is there a way to accomplish this?</p>
<p>In your service definition, set its <code>type</code> <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/7a3891e5f8819456b355750c0603e48da35b895b/pkg/api/v1/types.go#L1097" rel="noreferrer">field</a> to <code>LoadBalancer</code>, and kubernetes will automatically create an AWS Elastic Load Balancer for you if you're running on AWS. This feature should work on GCE/GKE, AWS, and OpenStack.</p> <p>For an example, check out the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/guestbook-go/guestbook-service.json" rel="noreferrer">guestbook-go example</a>.</p>
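<p>For reference, a minimal service manifest of that kind looks like the sketch below (the names and selector are placeholders; the selector must match the labels on your own pods):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app        # placeholder: must match your pod labels
  ports:
  - port: 80
    targetPort: 80
</code></pre>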
<p>Is the probe frequency customizable for liveness/readiness probes?</p> <p>Also, how many times does the readiness probe have to fail before the pod is removed from the service load-balancer? Is that customizable?</p>
<p>The probe frequency is controlled by the <code>sync-frequency</code> command line flag on the Kubelet, which defaults to syncing pod statuses once every 10 seconds.</p> <p>I'm not aware of any way to customize the number of failed probes needed before a pod is considered not-ready to serve traffic.</p> <p>If either of these features is important to you, feel free to <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/new" rel="nofollow">open an issue</a> explaining what your use case is or <a href="https://github.com/GoogleCloudPlatform/kubernetes/compare" rel="nofollow">send us a PR</a>! :)</p>
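<p>If you do want to experiment with the frequency, a sketch of what that looks like (the config file location is an assumption and varies by install; on Fedora-style manual setups the Kubelet arguments typically live in <code>/etc/kubernetes/kubelet</code>):</p> <pre><code># add to the Kubelet's startup arguments, then restart the kubelet service
KUBELET_ARGS="--sync-frequency=5s"
</code></pre>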
<p>A newly created Kubernetes cluster on GKE is not pushing its metrics to Stackdriver. Output of <code>kubectl cluster-info</code> is:</p> <pre><code>Kubernetes master is running at https://XXX.XXX.XXX.XXX KubeDNS is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/kube-dns KubeUI is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/kube-ui Heapster is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster </code></pre> <p>When I try to create a dashboard on Stackdriver with 'Custom Metrics', it says 'No Match Found'. Metrics were supposed to be present at this location with 'kubernetes.io' prefix according to Heapster documentation.</p> <p>I have also enabled Cloud Monitoring API with Read Write permission while creating cluster. Is it required for pushing cluster metrics? </p>
<p>What Heapster does with the metrics depends on its configuration. When running as part of GKE, the metrics aren't exported as "custom" metrics, but rather as official GKE service metrics. The feature is still in an experimental, soft-launch state, but you should be able to access them at <a href="http://app.google.stackdriver.com/gke" rel="nofollow">app.google.stackdriver.com/gke</a></p>
<p>I'm trying to mount several secrets in the same secret-volume for a kubernetes pod blueprint, but it's not accepted.</p> <p>Tried and can't be created :</p> <pre><code>"spec": { "volumes": [ { "name": "secret-volume", "secret": [ { "secretName": "secret1" }, { "secretName": "secret2" }, { "secretName": "secret3" } ] } </code></pre> <p>Like this it will just override them:</p> <pre><code>"volumes": [ { "name": "secret-volume", "secret": { "secretName": "secret1" }, "secret": { "secretName": "secret2" }, "secret": { "secretName": "secret3" } } ] </code></pre> <p>Is there any way to mount a list of secrets in the same volume ? Or only 1 secret / volume at the moment?</p> <p>Thanks</p>
<p>Only one secret is allowed per secret volume</p>
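<p>A common way to work around that limitation, sketched below in the same JSON style as the question, is to define one volume per secret and mount each volume at its own path inside the container (the names and paths are placeholders):</p> <pre><code>"spec": {
    "volumes": [
        { "name": "secret-volume-1", "secret": { "secretName": "secret1" } },
        { "name": "secret-volume-2", "secret": { "secretName": "secret2" } },
        { "name": "secret-volume-3", "secret": { "secretName": "secret3" } }
    ],
    "containers": [{
        "name": "app",
        "image": "local/app",
        "volumeMounts": [
            { "name": "secret-volume-1", "mountPath": "/etc/secrets/secret1", "readOnly": true },
            { "name": "secret-volume-2", "mountPath": "/etc/secrets/secret2", "readOnly": true },
            { "name": "secret-volume-3", "mountPath": "/etc/secrets/secret3", "readOnly": true }
        ]
    }]
}
</code></pre>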
<p>I have two service, S1 in cluster K1 and S2 in cluster K2. They have different hardware requirements. Service S1 needs to talk to S2.</p> <p>I don't want to expose Public IP for S2 due to security reasons. Using NodePorts on K2 cluster's compute instances with network load-balancing takes the flexibility out as I would have to add/remove K2's compute instances in target pool each time a node is added/removed in K2.</p> <p>Is there something like "service-selector" for automatically updating target-pool? If not, is there any other better approach for this use-case?</p>
<p>I can think of a couple of ways to access services across multiple clusters connected to the same GCP private network:</p> <ol> <li><p>Bastion route into k2 for all of k2's services:</p> <p>Find the <code>SERVICE_CLUSTER_IP_RANGE</code> for the k2 cluster. On GKE, it will be the <code>servicesIpv4Cidr</code> field in the output of cluster describe:</p> <pre><code>$ gcloud beta container clusters describe k2 ... servicesIpv4Cidr: 10.143.240.0/20 ... </code></pre> <p>Add an advanced routing rule to take traffic destined for that range and route it to a node in k2:</p> <pre><code>$ gcloud compute routes create --destination-range 10.143.240.0/20 --next-hop-instance k2-node-0 </code></pre> <p>This will cause <code>k2-node-0</code> to proxy requests from the private network for any of k2's services. This has the obvious downside of giving <code>k2-node-0</code> extra work, but it is simple.</p></li> <li><p>Install k2's kube-proxy on all nodes in k1.</p> <p>Take a look at the currently running kube-proxy on any node in k2:</p> <pre><code>$ ps aux | grep kube-proxy ... /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2 </code></pre> <p>Copy k2's kubeconfig file to each node in k1 (say <code>/var/lib/kube-proxy/kubeconfig-v2</code>) and start a second kube-proxy on each node:</p> <pre><code>$ /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig-k2 --healthz-port=10247 </code></pre> <p>Now, each node in k1 handles proxying to k2 locally. A little tougher to set up, but has better scaling properties.</p></li> </ol> <p>As you can see, neither solution is all that elegant. Discussions are happening about how this type of setup should ideally work in Kubernetes. You can take a look at the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/proposals/federation.md">Cluster Federation</a> proposal doc (specifically the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/proposals/federation.md#cross-cluster-service-discovery">Cross Cluster Service Discovery</a> section), and join the discussion by opening up issues/sending PRs.</p>
<p>I followed the official <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html">Kubernetes installation guide</a> to install Kubernetes on Fedora 22 servers. Everything worked out for me during the installation.</p> <p>After the installation I could see that all my nodes were up and running and connected to the master. However, it kept failing when I tried to create a simple pod, following the <a href="http://kubernetes.io/v1.0/docs/user-guide/walkthrough/README.html">101 guide</a>.</p> <pre><code>$ kubectl create -f pod-nginx.yaml </code></pre> <p>Error from server: error when creating "pod-nginx.yaml": Pod "nginx" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account</p> <p>Do I need to create an API token? If yes, how?</p> <p>I googled the issue, but without any helpful results. It looks like I am the only one on this planet who has hit this issue.</p> <p>Does anyone have ideas on this?</p>
<p>The ServiceAccount admission controller prevents pods from being created until their service account in their namespace is initialized.</p> <p>If the controller-manager is started with the appropriate arguments, it will automatically populate namespaces with a default service account, and auto-create the API token for that service account.</p> <p>It looks like that guide needs to be updated with the information from this comment: <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/11355#issuecomment-127378691">https://github.com/GoogleCloudPlatform/kubernetes/issues/11355#issuecomment-127378691</a></p>
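<p>For illustration, the "appropriate arguments" are the service-account key flags; a sketch of what they look like (the key path is just an example, and both daemons must point at the same key pair):</p> <pre><code># kube-apiserver (verifies service account tokens)
--service-account-key-file=/etc/kubernetes/certs/serviceaccount.key

# kube-controller-manager (signs and auto-creates the tokens)
--service-account-private-key-file=/etc/kubernetes/certs/serviceaccount.key
</code></pre>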
<p>I ran into some trouble getting an external ip address after posting the following json object (variables excluded):</p> <pre><code>$json= '{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "" }, "spec": { "ports": [{ "port": 80, "targetPort": 80 }], "selector": { "app": "" }, "type": "LoadBalancer" } }'; </code></pre> <p>The service is created but no external ip is ever given.</p> <p>Unable to determine where the issue lay, I proceeded to install a clean copy of kubernetes (and the cluster it is defined to install) using the following command provided in the documentation (V1 kubernetes/examples/simple-nginx.md):</p> <pre><code>curl -sS https://get.k8s.io | bash </code></pre> <p>This of course set things up automatically. I then ran the following commands to test if the LoadBalancer function was working:</p> <pre><code>kubectl run my-nginx --image=nginx --replicas=2 --port=80 </code></pre> <p>After running <code>kubectl get pods</code> to confirm that they were ready, I exposed the service:</p> <pre><code>kubectl expose rc my-nginx --port=80 --type=LoadBalancer </code></pre> <p>I then ran <code>kubectl get service</code> for the past few minutes, and no public ip is being provided..</p> <p>That cant be right?</p> <p><strong>EDIT</strong></p> <pre><code>kubectl get services NAME LABELS SELECTOR IP(S) PORT(S) kubernetes component=apiserver,provider=kubernetes &lt;none&gt; 10.0.0.1 443/TCP my-nginx run=my-nginx run=my-nginx 10.0.136.163 80/TCP kubectl get service my-nginx -o yaml apiVersion: v1 kind: Service metadata: creationTimestamp: 2015-08-11T11:44:02Z labels: run: my-nginx name: my-nginx namespace: default resourceVersion: "1795" selfLink: /api/v1/namespaces/default/services/my-nginx uid: 434751be-401e-11e5-a219-42010af0da43 spec: clusterIP: 10.x.xxx.xxx ports: - nodePort: 31146 port: 80 protocol: TCP targetPort: 80 selector: run: my-nginx sessionAffinity: None type: LoadBalancer status: loadBalancer: {} </code></pre> <p>After running (Thanks GameScripting):</p> <pre><code>kubectl describe service my-nginx </code></pre> <p>I saw the following error:</p> <pre><code>FirstSeen LastSeen Count From SubobjectPath Reason Message Tue, 11 Aug 2015 14:00:00 +0200 Tue, 11 Aug 2015 14:02:41 +0200 9 {service-controller } creating loadbalancer failed failed to create external load balancer for service default/my-nginx: googleapi: Error 403: Quota 'FORWARDING_RULES' exceeded. Limit: 15.0 </code></pre>
<p>After manually removing the Forwarding Rules Under "Networking->Load Balancing->Network Load Balancing" (Or you can use <code>gcloud compute forwarding-rules delete</code>) I was able to get public Ip's again. It seems somehow the forwarding rules werent deleted and reached the limit. It is strange as when I ran <code>Kubectl stop service</code> it removed the forwarding rule for me.</p>
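<p>For anyone hitting the same quota error, the forwarding rules can also be inspected and cleaned up from the command line (the rule name and region below are placeholders):</p> <pre><code># see which forwarding rules are still hanging around
gcloud compute forwarding-rules list

# delete a stale one left over from an old service
gcloud compute forwarding-rules delete my-stale-rule --region us-central1
</code></pre>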
<p>On this page in the Kubernetes docs <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/user-guide/pods.md" rel="noreferrer">Pods</a>, it states</p> <blockquote> <p>The context of the pod can be defined as the conjunction of several Linux namespaces:</p> <p>PID namespace (applications within the pod can see each other's processes) network namespace (applications within the pod have access to the same IP and port space)</p> <p>IPC namespace (applications within the pod can use SystemV IPC or POSIX message queues to communicate)</p> <p>UTS namespace (applications within the pod share a hostname)</p> </blockquote> <p>However, it then says that</p> <blockquote> <p>In terms of Docker constructs, a pod consists of a colocated group of Docker containers with shared volumes. PID namespace sharing is not yet implemented with Docker.</p> </blockquote> <p>So does this mean that pods cannot see processes in other containers or perform any kind of IPC between containers running in the same pod? How would I send a signal to a process running in another pod?</p>
<p>Yeah, we wish that they could share the PID namespace, but as you say, it is not currently supported by Docker. Once we have support in Docker, we will rapidly add it to Kubernetes.</p> <p>This means that you can't use signal to signal other processes in the Pod.</p> <p>You can, however, use IPC mechanisms like pipes and shared memory.</p>
<p>I Have the following setup in mind: Kubernetes on Mesos (based on the kubernetes-mesos project) within a /16 network. Each pod will have its own IP and I believe this will avail 64 000 pods. The idea is to provide isolation for each app i.e. Each app gets its own mysql within the same pod - the app accesses mysql on localhost(within the pod). If an additional service were needed, I'd use kubernetes rolling updates to add the service's container to the pod, the app will be able to access this new service on localhost as well. Each application needs as much isolation as possible.</p> <ol> <li>Are there any defects to such an implementation? </li> <li>Do I have to use weave? <ul> <li>There's an option to specify the service-ip-range while running the kubernetes-mesos install.</li> </ul></li> <li>One hole is how do I scale a service, is this really viable? </li> <li>Is there a better way to do this? i.e. Offering isolated services</li> </ol> <p>Thanks. PS//I'm obviously a noobie at this and I'm trying to get the best possible setup running. </p>
<p>A common misconception is that a Pod should manage a vertical, multi-tier stack: for example a web tier + DB tier together.</p> <p>It's interesting to read the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/user-guide/pods.md#uses-of-pods" rel="nofollow">Kubernetes design intent of Pods</a>: they're for collecting 'helper' processes rather than composing a vertical stack.</p> <p>To answer your questions, I'd recommend:</p> <ul> <li>Define a Pod template for the web tier only. This can be scaled to any size required, using a replication controller (questions #1 and #3).</li> <li>Define another Pod for MySQL.</li> <li>Use the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/user-guide/services.md" rel="nofollow">Service abstraction</a> to locate these components.</li> </ul>
<p>I want to use Kubernetes as my default development environment, so I set up the cluster locally with Docker as explained in the <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html" rel="nofollow noreferrer">official doc</a>. I pushed my example to a <a href="https://github.com/bitgandtter/k8s_devel_env" rel="nofollow noreferrer">github repository</a>.</p> <p>My setup steps after having a Kubernetes cluster running were:</p> <pre><code>* cd cluster_config/app &amp;&amp; docker build --tag=k8s_php_dev . &amp;&amp; cd ../.. * kubectl -s http://127.0.0.1:8080 create -f cluster_config/app/app.rc.yml * kubectl -s http://127.0.0.1:8080 create -f cluster_config/app/app.services.yml </code></pre> <p>My issue comes from wanting to map a local directory as a volume inside my app pod, so I can dynamically share the files in it between my local host and the pod; that way I can develop and change the files, and have the service update dynamically.</p> <p>I use a volume with a hostPath. The pod, replication controller and service are created successfully, but the pod does not share the directory and doesn't even have the files at the supposed mountPath.</p> <p>What am I doing wrong?</p> <p>Thanks</p>
<p>The issue was on the volume definition, the hostPath.path property should hold the absolute address of the directory to mount.</p> <p>Example:</p> <pre><code>hostPath: path: /home/bitgandtter/Documents/development/php/k8s_devel_env </code></pre>
<p>I am trying to build my image using this plugin: <a href="https://github.com/spotify/docker-maven-plugin#use-a-dockerfile">https://github.com/spotify/docker-maven-plugin#use-a-dockerfile</a></p> <p>When I run <code>mvn clean package docker:build</code></p> <p>I get this error:</p> <pre><code>[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.2.3:build (defa ult-cli) on project demo: Exception caught: Request error: POST https://192.168. 99.100:2376/v1.12/build?t=DevOpsClient: 500: HTTP 500 Internal Server Error -&gt; [ Help 1] </code></pre> <p>When I check the docker daemon logs, I see this:</p> <pre><code>Handler for POST /build returned error: repository name component must match \"[a-z0-9]+(?:[._-][a-z0-9]+)*\"" statusCode=500 </code></pre> <p>Here is the doc for the naming convention: <a href="https://docs.docker.com/registry/spec/api/">https://docs.docker.com/registry/spec/api/</a></p> <p>Apparently you cannot have any upper case letters.</p> <p>I am trying to build using Spring boot my following this guide: <a href="https://spring.io/guides/gs/spring-boot-docker/">https://spring.io/guides/gs/spring-boot-docker/</a></p> <p>I am using a SNAPSHOT release of spring boot and I have a directory named demo-0.1.1-SNAPSHOT. I believe this may be causing the problem.</p> <p>Also I am working on windows and my project directory path is like:</p> <pre><code>C:\Users\myname\UserRegistrationClient\git\..... etc </code></pre> <p>Would this also affect the repository naming convention?</p> <p>And how would I change it? </p>
<p>This regular expression, <code>[a-z0-9]+(?:[._-][a-z0-9]+)*</code>, doesn't allow any upper case letters, so you should change your image name to <code>devopsclient</code>.</p>
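<p>If you are using the Spotify docker-maven-plugin, the image name is set in the plugin configuration in your <code>pom.xml</code>; a rough sketch (the prefix and directory values are placeholders, and note that everything must be lower case):</p> <pre><code>&lt;plugin&gt;
  &lt;groupId&gt;com.spotify&lt;/groupId&gt;
  &lt;artifactId&gt;docker-maven-plugin&lt;/artifactId&gt;
  &lt;configuration&gt;
    &lt;imageName&gt;myregistry/devopsclient&lt;/imageName&gt;
    &lt;dockerDirectory&gt;src/main/docker&lt;/dockerDirectory&gt;
  &lt;/configuration&gt;
&lt;/plugin&gt;
</code></pre>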
<p>I'm running Apache2 in a docker container and want to write nothing to the disk, writing logs to stdout and stderr. I've seen a few different ways to do this (Supervisord and stdout/stderr, Apache access log to stdout) but these seem like hacks. Is there no way to do this by default?</p> <p>To be clear, I do not want to tail the log, since that will result in things being written to the disk in the container.</p> <p>The "official" version checked into Docker Hub (<a href="https://hub.docker.com/_/httpd/" rel="noreferrer">https://hub.docker.com/_/httpd/</a>) still write to disk.</p> <p>Also, what do I need to do to stop Apache from failing when it tries to roll the logs?</p> <p>One other thing - ideally, I'd really like to do this without another add-on. nginx can do this trivially.</p>
<p>I'm not positive that this won't mess with httpd's logging at all (e.g. if it tries to seek within the file), but you can set up symlinks from the log paths to /dev/stdout and /dev/stderr, like so:</p> <pre><code>ln -sf /dev/stdout /path/to/access.log ln -sf /dev/stderr /path/to/error.log </code></pre> <p>The entry command to the vanilla httpd container from Docker Hub could be made to be something like</p> <pre><code>ln -sf /dev/stdout /path/to/access.log &amp;&amp; ln -sf /dev/stderr /path/to/error.log &amp;&amp; /path/to/httpd </code></pre>
<p>I am planning a production deployment with 'Kubernetes + CoreOS' on Microsoft Azure, and planning to run a couple of microservices in the cluster. My plan is to have 5 nodes, and I will have 5-6 pods to run, each with 3-5 instances. I was following the official Kubernetes documentation and found <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/coreos/azure/README.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/coreos/azure/README.md</a> really helpful; the script works great. But I don't think it's production ready for my use case, as</p> <ul> <li><p>the deployed VMs are not assigned to Availability Sets</p></li> <li><p>it's not possible to specify an existing Virtual Network, Resources, location etc.</p></li> </ul> <p>I am a newbie in this field. Can someone help me understand what steps need to be taken to make this a real production environment?</p>
<blockquote> <p>the deployed VMs are not assigned to Availability Sets</p> </blockquote> <p>It is true indeed, as an author and maintainer of the guide, I will welcome a pull-request to enable this, which should be quite easy and probably similar how <a href="https://github.com/kubernetes/kubernetes/blob/7d936fe4cabac9365e5ff4abf6e48e93bf124efa/docs/getting-started-guides/coreos/azure/lib/azure_wrapper.js#L144" rel="nofollow">affinity groups are currently handled</a>.</p> <blockquote> <p>Not able to specify an existing Virtual Network, Resources, location etc.</p> </blockquote> <p>This is a very good point, however it's probably best to refactor current ad-hoc JavaScript wrapping to something more streamlined with <a href="https://azure.microsoft.com/en-gb/documentation/articles/resource-group-overview/" rel="nofollow">Azure Resource Manager</a>, which hasn't been generally available at the time I implemented that integration.</p>
<p>I started to use Docker and I'm trying out Google's Kubernetes project for my container orchestration. It looks really good!</p> <p>The only thing I'm curious about is how I would handle the volume storage.</p> <p>I'm using EC2 instances and the containers mount volumes from the EC2 filesystem.</p> <p>The only thing left is figuring out how to deploy my application code onto all those EC2 instances, right? How can I handle this?</p>
<p>It's somewhat unclear what you're asking, but a good place to start would be reading about your options for <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/volumes.md" rel="nofollow"><code>volumes</code> in Kubernetes</a>.</p> <p>The options include using local EC2 disk with a lifetime tied to the lifetime of your pod (<code>emptyDir</code>), local EC2 disk with lifetime tied to the lifetime of the node VM (<code>hostDir</code>), and an Elastic Block Store volume (<code>awsElasticBlockStore</code>).</p>
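<p>As an illustration of the EBS option, a pod that mounts an existing EBS volume would look roughly like the sketch below (the volume ID is a placeholder; the volume must already exist and be in the same availability zone as the node):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ebs-example
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    awsElasticBlockStore:
      volumeID: vol-0a1b2c3d          # placeholder: an existing EBS volume ID
      fsType: ext4
</code></pre>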
<p>I'm trying to use AWS to set up Kubernetes (version 1.0.1) and deploy a few services and pods there.</p> <p>But I have got stuck with a LoadBalancer service. According to the docs, I just need to set the correct type of service and open the ports in the firewall.</p> <p>But the service doesn't receive an external IP (ingress is empty).</p> <p>Do I need to create the LoadBalancer manually in the AWS console? Maybe some other action?</p> <p>Thanks,</p>
<p>The LoadBalancer should be getting created automatically.</p> <p>There might be IAM policy issues preventing the load balancer from being created (see <a href="https://github.com/kubernetes/kubernetes/issues/10692" rel="nofollow">Issue #10692</a>).</p> <p>If that isn't the problem, looking for errors in <code>/var/log/kube-controller-manager.log</code> on the master VM may give you an idea of what is going wrong.</p>
<p>What is the best practice to get a geo-distributed cluster with asynchronous network channels?</p> <p>I suspect I would need some "load balancer" which should redirect connections "within" its own DC. Do you know of anything like this already in place?</p> <p>Second question: should we use one HA cluster or create a dedicated cluster for each of the DCs?</p>
<p>The assumption of the kubernetes development team is that cross-cluster federation will be the best way to handle cross-zone workloads. The tooling for this is easy to imagine, but has not emerged yet. You can (on your own) set up regional or global load-balancers and direct traffic to different clusters based on things like GeoIP.</p>
<p>I have a Kubernetes cluster running in Amazon EC2 inside its own VPC, and I'm trying to get Dockerized services to connect to an RDS database (which is in a different VPC). I've figured out the peering and routing table entries so I can do this from the minion machines:</p> <pre><code>ubuntu@minion1:~$ psql -h &lt;rds-instance-name&gt; Password: </code></pre> <p>So that's all working. The problem is that when I try to make that connection from inside a Kubernetes-managed container, I get a timeout:</p> <pre><code>ubuntu@pod-1234:~$ psql -h &lt;rds-instance-name&gt; … </code></pre> <p>To get the minion to connect, I configured a peering connection, set up the routing tables from the Kubernetes VPC so that <code>10.0.0.0/16</code> (the CIDR for the RDS VPC) maps to the peering connection, and updated the RDS instance's security group to allow traffic to port 5432 from the address range <code>172.20.0.0/16</code> (the CIDR for the Kubernetes VPC).</p>
<p>With the help of Kelsey Hightower, I solved the problem. It turns out it was a Docker routing issue. I've written up the details in a <a href="http://ben.straub.cc/2015/08/19/kubernetes-aws-vpc-peering/" rel="noreferrer">blog post</a>, but the bottom line is to alter the minions' routing table like so:</p> <pre><code>$ sudo iptables -t nat -I POSTROUTING -d &lt;RDS-IP-ADDRESS&gt;/32 -o eth0 -j MASQUERADE </code></pre>
<p>I have been going through this to get started with Kubernetes, running on Mac OS v 10.9.5, Vagrant v 1.7.4, VirtualBox v 4.3.28, and Kubernetes v 1.0.3: <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/vagrant.md">https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/vagrant.md</a></p> <p>The full install log can be found here: <a href="http://mcdonaldland.info/files/kubernetes/install-log.txt">http://mcdonaldland.info/files/kubernetes/install-log.txt</a></p> <p>I've tried both the download and sh install versions of this.</p> <p>I've run this multiple times and every time it gets to the "waiting for each minion to be registered with cloud provider" step and loops forever. Eventually, I "CTRL+C" and exit the script. In looking at the source it obviously has something to do with the nodes not being found, but I can't figure out why it cannot find them.</p> <p>After killing the script, running 'kubectl get nodes' returns nothing. The same happens if I use the SH script.</p> <p>If I am in the ~/kubernetes/cluster directory and run 'vagrant ssh master' or 'vagrant ssh minion-1' I can connect to them. If I run some scripts to add pods I can get them to register. Same with Replication Controllers.</p> <p>When I check the status, the pods never start. When I dig into the logs it appears that the nodes cannot be connected to (aka found) and the minions are erroring on startup.</p> <p>I have been struggling for 5 days now to figure out why my nodes are not showing up / registering properly. I figure I'm missing something simple but am at a loss now.</p> <p>Any help is appreciated. Thanks in advance.</p>
<p>The reason is a bug in kubernetes. It <a href="https://github.com/kubernetes/kubernetes/issues/12854" rel="noreferrer">seems to be a TLS error</a>.</p> <p>If you manually download <a href="https://github.com/kubernetes/kubernetes/releases/tag/v1.0.1" rel="noreferrer">kubernetes 1.0.1</a> you will get closer. However, there's been a bugfix that you'll need to patch into 1.0.1 to make it work properly with vagrant. Otherwise, network provisioning will not work and you'll run into <a href="https://github.com/kubernetes/kubernetes/issues/12285#issuecomment-128613056" rel="noreferrer">this issue</a>. </p> <p>So, as suggested there, apply <a href="https://github.com/kubernetes/kubernetes/pull/12237/files" rel="noreferrer">these changes</a> to the provision scripts of v1.0.1 and you'll be good to go. Simple, right?</p>
<p>We are currently moving towards microservices with Docker from a monolith application running in JBoss. I want to know which platform/tools/frameworks should be used to test these Docker containers in a developer environment, and also what tools should be used to deploy these containers to this developer test environment.</p> <p>Is it a good option to use something like Kubernetes with chef/puppet/vagrant?</p>
<p>I think so. Make sure to get service discovery, logging and virtual networking right. For the former you can check out skydns. Docker now has a few logging plugins you can use for log management. For virtual networking you can look at Flannel and Weave.</p> <p>You want service discovery because Kubernetes will schedule the containers the way it sees fit and you need some way of telling what IP/port your microservice will be at. Virtual networking makes it so each container has its own subnet, thus preventing port clashes in case you have two containers with the same ports exposed on the same host (Kubernetes won't let them clash; it will schedule containers to run as long as you have hosts with ports available, and if you try to create more they just won't run).</p> <p>Also, you can try the built-in cluster tools in Docker itself, like docker service, the docker network commands and Docker Swarm.</p> <p>Docker-machine helps in case you already have a VM infrastructure in place.</p>
<p>Is there a way to get the status of a deployment? Is the concept of a deployment modeled somehow? I can't find any "global" info on failures for the pods relevant to a particular deployment, and I don't know if looking at all the pods makes sense... By "failure" I mean, for example, if I misspell the docker URL of the image... Of course, I could just query one pod at random... but I'm not sure that would be the best way to spot problems that are "common" to the whole deployment. What do you think? What is your proposed approach in this scenario?</p> <p>Thanks in advance.</p>
<p>The best way to do this at the moment is using </p> <pre><code>kubectl describe rc &lt;RC_NAME&gt; </code></pre> <p>There's a proposal for implementing a deployment resource to cope with this use case: <a href="https://github.com/kubernetes/kubernetes/blob/55b7500b33553a77f8dceb5404d6af1767399386/docs/proposals/deployment.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/55b7500b33553a77f8dceb5404d6af1767399386/docs/proposals/deployment.md</a> </p>
<p>I have several pods, for example a python web app and a redis (shared by other apps), so I need to place redis in a separate pod. But they all use the same subnet from docker (172.17.0.0/16) or even the same IP address. How can the app pods talk to the redis pod?</p> <p>Maybe what I want to ask is: what's the best way to set up multi-host container networking?</p> <hr> <p>7 weeks later, I am more familiar with kubernetes. I know kubernetes' network model assumes that pods can access each other. So if your app service pod needs to access a shared service (redis) pod, you need to expose the shared service as a kubernetes service; then you can get the shared service's endpoint from the app pods' environment variables or its hostname.</p>
<p>How did you set up Kubernetes? I'm not aware of any installation scripts that put pod IPs into a 172 subnet.</p> <p>But in general, assuming Kubernetes has been set up properly (ideally using one of the provided scripts), using a <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md" rel="nofollow">service object</a> to load balance across your 1 or more redis pods would be the standard approach.</p>
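<p>To make that concrete, a minimal sketch of a redis service (the selector is a placeholder and must match the labels on your redis pods); app pods can then reach it via the <code>REDIS_SERVICE_HOST</code>/<code>REDIS_SERVICE_PORT</code> environment variables or, with DNS enabled, the hostname <code>redis</code>:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis        # placeholder: must match your redis pod's labels
  ports:
  - port: 6379
    targetPort: 6379
</code></pre>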
<p>We are running out of disk space for containers running on our nodes. We are running k8s 1.0.1 in aws. We are also trying to do all our configuration in software instead of manually configuring things. How do we increase the disk size of the nodes? Right now they have 8gb each as created by <a href="https://get.k8s.io" rel="nofollow">https://get.k8s.io</a> | bash. It's fine if we have to create a new cluster and move our services/pods to it.</p>
<p>You should be able to do so setting the <code>MINION_ROOT_DISK_SIZE</code> environment variable before creating the cluster. However this option was <a href="https://github.com/kubernetes/kubernetes/pull/11459" rel="nofollow">just merged yesterday</a>, so it may not be available yet unless you use the cluster/kube-up.sh script from HEAD of the repository.</p>
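<p>Assuming that change is present in your copy of the scripts, the usage would be a sketch like the following (interpreting the value as GB is an assumption based on how the other size variables behave):</p> <pre><code>export MINION_ROOT_DISK_SIZE=50
curl -sS https://get.k8s.io | bash
</code></pre>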
<p>I am trying to setup Kubernetes for the first time. I am following the Fedora Manual installation guide: <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html">http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html</a></p> <p>I checked the logs of my API server and am getting this error:</p> <pre><code> server.go:464] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again. </code></pre> <p>I assume it needs some sort of cert but the installation guide doesnt mention anything about this. Here is what my apiserver config file looks like</p> <pre><code># The address on the local server to listen to. KUBE_API_ADDRESS="--address=0.0.0.0" # The port on the local server to listen on. KUBE_API_PORT="--port=8080" # Port node listen on KUBELET_PORT="--kubelet_port=10250" # Location of the etcd cluster #KUBE_ETCD_SERVERS="--etcd_servers=http://vagrant-master:4001" KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001" # Address range to use for services KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" # default admission control policies KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota" # Add your own! KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/certs/serviceaccount.key" </code></pre> <p>Here is my service status</p> <pre><code>kube-apiserver.service - Kubernetes API Server Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled) Active: active (running) since Mon 2015-08-24 15:03:07 UTC; 5min ago Docs: https://github.com/GoogleCloudPlatform/kubernetes Main PID: 13663 (kube-apiserver) CGroup: /system.slice/kube-apiserver.service └─13663 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4001 --address=0.0.0.0 --port=8080 --kubelet_port=10250 --allow_privileged=false --service-cluster-ip-range=10.254.0.0/16 --admission_control=NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --service_account_key_file=/etc/kubernetes/certs/serviceaccount.key </code></pre> <p>How can i fix this error?</p>
<p>By default, the kube-apiserver process tries to open a secure (https) server port on port 6443 using credentials from the directory <code>/var/run/kubernetes</code>. If you want to disable the secure port, you can pass <code>--secure-port=0</code> which should make your error go away. Alternatively, you can <a href="https://kubernetes.io/docs/tasks/administer-cluster/certificates/" rel="nofollow noreferrer">manually create certificates</a> for your cluster so that the process is able to successfully open the secure port.</p>
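<p>In terms of the config file shown in the question, disabling the secure port would be a one-line change to the API server arguments, for example:</p> <pre><code># Add your own!
KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/certs/serviceaccount.key --secure-port=0"
</code></pre> <p>Note that with this the apiserver only listens on the insecure port, which is fine for a local test setup but not something you'd want beyond that.</p>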
<p>As previously reported <a href="https://github.com/kubernetes/kubernetes/issues/11752" rel="nofollow noreferrer">here</a>, <strong>two pods can't mount the same disk even though one of them tries to do it as read-only mode</strong>.</p> <p>This is supposed to be allowed from Kubernetes documentation.</p> <h3>Mounting scheme is:</h3> <ul> <li>UniqueCluster/Pod<strong>A</strong> has successfully mounted gdeDisk1 as read-write</li> <li>UniqueCluster/Pod<strong>B</strong> fails to start when mounting gdeDisk1 as read-only</li> </ul> <h3>Node description:</h3> <pre><code>Name: gke-zupcat-cluster-8fd35d81-node-1zr4 Labels: kubernetes.io/hostname=gke-zupcat-cluster-8fd35d81-node-1zr4 CreationTimestamp: Wed, 22 Jul 2015 14:47:56 -0300 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message Ready True Thu, 23 Jul 2015 12:06:18 -0300 Wed, 22 Jul 2015 22:53:34 -0300 kubelet is posting ready status Addresses: 10.240.17.72,146.148.79.174 Capacity: cpu: 2 memory: 7679608Ki pods: 40 Version: Kernel Version: 3.16.0-0.bpo.4-amd64 OS Image: Debian GNU/Linux 7 (wheezy) Container Runtime Version: docker://Unknown Kubelet Version: v1.0.1 Kube-Proxy Version: v1.0.1 PodCIDR: 10.108.0.0/24 ExternalID: 11953122931827361742 Pods: (5 in total) Namespace Name default fastrwdiskpod-yu517 kube-system fluentd-cloud-logging-gke-zupcat-cluster-8fd35d81- node-1zr4 kube-system kube-dns-v8-i3h20 kube-system kube-ui-v1-8zdrq kube-system monitoring-heapster-v5-e1zmi No events. </code></pre> <h3>Products versions:</h3> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;0&quot;, GitVersion:&quot;v1.0.0&quot;, GitCommit:&quot;cd821444dcf3e1e237b5f3579721440624c9c4fa&quot;, GitTreeState:&quot;clean&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;0&quot;, GitVersion:&quot;v1.0.1&quot;, GitCommit:&quot;6a5c06e3d1eb27a6310a09270e4a5fb1afa93e74&quot;, GitTreeState:&quot;clean&quot;} docker version Docker version 1.7.1, build 786b29d </code></pre>
<p>According to the <a href="https://cloud.google.com/compute/docs/disks/persistent-disks#use_multi_instances" rel="nofollow">GCE persistent disk documentation</a>: "if you attach a persistent disk to multiple instances, all instances must attach the persistent disk in read-only mode."</p> <p>The Kubernetes <a href="http://kubernetes.io/v1.0/docs/user-guide/volumes.html#gcepersistentdisk" rel="nofollow">documentation for GCE PD volumes</a> also explains this limitation: "A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous readers allowed."</p>
<p>I was just wondering how I can point to two env-files, say <code>/var/app/old/file.conf</code> and <code>/var/market/old/db.conf</code>, both on the command line, as I know there is support for multiple -e flags. Does Docker support pointing to multiple env-files in a command line like the one below?</p> <pre><code>docker run -d --hostname=158.64.72.80 -d -p 80:80 --env-file /var/app/old/file.conf --env-file /var/market/old/db.conf </code></pre>
<p>You can do exactly like the command you are running already. For example:</p> <p>file1</p> <pre><code>GGG=/home/ppp </code></pre> <p>file2</p> <pre><code>HHH=/ter/ssd </code></pre> <p>Then run the Docker command:</p> <pre><code>docker run -it --env-file=/Users/user/file1 --env-file=/Users/users/file2 centos:6.6 /bin/bash </code></pre> <p>Then once in the container:</p> <pre><code>[user@99964c311fef ~]# env HOSTNAME=99964c311fef TERM=xterm OLDPWD=/ LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PWD=/user GGG=/home/ppp # &lt;-- Here's file1 LANG=en_US.UTF-8 SHLVL=1 HOME=/user LESSOPEN=||/usr/bin/lesspipe.sh %s HHH=/ter/ssd # &lt;-- Here's file2 G_BROKEN_FILENAMES=1 _=/usr/bin/env [user@99964c311fef ~]# </code></pre>
<p>So my objective here is to set up a cluster of several kafka brokers in a distributed fashion. But I can't see a way to make the brokers aware of each other.</p> <p>As far as I understand, every broker needs a separate ID in its config, which I cannot guarantee or configure if I launch the containers from Kubernetes.</p> <p>Do they also need to have the same advertised_host?</p> <p>Are there any parameters I'm missing that would need to be changed for the nodes to discover each other?</p> <p>Would it be viable to do such a configuration at the end of the Dockerfile with a script? And/or a shared volume?</p> <p>I'm currently trying to do this with the spotify/kafka image, which has a preconfigured zookeeper+kafka combination, on vanilla Kubernetes.</p>
<p>My solution for this has been to <strong>use the IP as the ID</strong>: trim the dots and you get a unique ID that is also available outside of the container to other containers.</p> <p>With a Service you can get access to the multiple containers' IPs (see my answer here on how to do this: <a href="https://stackoverflow.com/questions/32177869/whats-the-best-way-to-let-kubenetes-pods-communicate-with-each-other/32233375#32233375">what&#39;s the best way to let kubenetes pods communicate with each other?</a>),</p> <p>so you can get their IDs too if you use IPs as the unique ID. The only issue is that the IDs are not continuous and do not start at 0, but zookeeper / kafka don't seem to mind.</p> <p><strong>EDIT 1:</strong></p> <p>The follow-up concerns configuring Zookeeper:</p> <p>Each ZK node needs to know of the other nodes. The Kubernetes discovery service knows of the nodes that are within a <strong>Service</strong>, so the idea is to start a <strong>Service</strong> with the ZK nodes.</p> <p><em>This Service needs to be started BEFORE creating the ReplicationController (RC) of the Zookeeper pods.</em></p> <p>The start-up script of the ZK container will then need to: </p> <ul> <li>wait for the discovery service to populate the ZK Service with its nodes (that takes a few seconds; for now I just add a sleep 10 at the beginning of my startup script, but more reliably you should look for the service to have at least 3 nodes in it)</li> <li>look up the containers forming the Service in the discovery service: this is done by querying the API. The <code>KUBERNETES_SERVICE_HOST</code> environment variable is available in each container. The endpoint to find the service description is then</li> </ul> <p><code>URL="http(s)://$USERNAME:$PASSWORD@${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${NAMESPACE}/endpoints/${SERVICE_NAME}"</code></p> <p>where <code>NAMESPACE</code> is <code>default</code> unless you changed it, and <code>SERVICE_NAME</code> would be zookeeper if you named your service zookeeper.</p> <p>There you get the description of the containers forming the Service, with their IP in an "ip" field. You can do:</p> <pre><code>curl -s $URL | grep '\"ip\"' | awk '{print $2}' | awk -F\" '{print $2}' </code></pre> <p>to get the list of IPs in the Service. With that, populate the zoo.cfg on the node using the ID defined above.</p> <p>You might need the <em>USERNAME</em> and <em>PASSWORD</em> to reach the endpoint on services like Google Container Engine. These need to be put in a <strong>Secret</strong> volume (see the doc here: <a href="http://kubernetes.io/v1.0/docs/user-guide/secrets.html" rel="nofollow noreferrer">http://kubernetes.io/v1.0/docs/user-guide/secrets.html</a> )</p> <p>You would also need to use <code>curl -s --insecure</code> on Google Container Engine unless you go through the trouble of adding the CA cert to your pods.</p> <p>Basically, add the volume to the container and look up the values from the file. (Contrary to what the doc says, DO NOT put the \n at the end of the username or password when base64 encoding: it just makes your life more complicated when reading those.)</p> <p><strong>EDIT 2:</strong> </p> <p>Another thing you'll need to do on the Kafka nodes is get the IPs and hostnames and put them in the /etc/hosts file. Kafka seems to need to know the nodes by hostname, and these are not set within service nodes by default.</p> <p><strong>EDIT 3:</strong> </p> <p>After much trial and thought, using IP as an ID may not be so great: it depends on how you configure storage.
For any kind of distributed service like zookeeper, kafka, mongo or hdfs, you might want to use the emptyDir type of storage, so it is just on that node (mounting remote storage kind of defeats the purpose of distributing these services!). emptyDir will reload with the data on the same node, so it seems more logical to use the NODE ID (node IP) as the ID, because then a pod that restarts on the same node will have the data. That avoids potential corruption of the data (if a new node starts writing in the same dir that is not actually empty, who knows what can happen), and it also matters for Kafka: topics are assigned a broker.id, and if the broker id changes, zookeeper does not update the topic's broker.id, so the topic looks like it is available BUT points to the wrong broker.id and it's a mess.</p> <p>So far I have yet to find how to get the node IP though, but I think it's possible to look it up in the API by looking up the service pods' names and then the node they are deployed on.</p> <p><strong>EDIT 4</strong></p> <p>To get the node IP, you can get the pod hostname == name from the endpoints API /api/v1/namespaces/default/endpoints/ as explained above. Then you can get the node IP from the pod name with /api/v1/namespaces/default/pods/</p> <p>PS: this is inspired by the example in the Kubernetes repo (example for rethinkdb here: <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/rethinkdb" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/examples/rethinkdb</a>)</p>
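<p>To make EDIT 1 a bit more concrete, here is a rough sketch of the kind of start-up script described above. Every path, file name and port in it (the Secret mount path, the zoo.cfg location, the ZK peer ports) is an assumption for illustration, not something taken from the original setup:</p> <pre><code>#!/bin/sh
# Sketch of the start-up logic described above; all names/paths are assumptions.
NAMESPACE=${NAMESPACE:-default}
SERVICE_NAME=${SERVICE_NAME:-zookeeper}
USERNAME=$(cat /etc/secret-volume/username)     # from the Secret volume
PASSWORD=$(cat /etc/secret-volume/password)
URL="https://${USERNAME}:${PASSWORD}@${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${NAMESPACE}/endpoints/${SERVICE_NAME}"

sleep 10   # crude wait for the endpoints to be populated, as described above

# Pod IPs currently behind the Service.
IPS=$(curl -s --insecure "$URL" | grep '\"ip\"' | awk '{print $2}' | awk -F\" '{print $2}')

# "IP with the dots trimmed" becomes this container's unique ID.
MY_ID=$(hostname -i | tr -d '.')
echo "$MY_ID" &gt; /var/lib/zookeeper/myid

# One server.&lt;id&gt;=&lt;ip&gt;:2888:3888 line per peer in zoo.cfg.
for ip in $IPS; do
  echo "server.$(echo "$ip" | tr -d '.')=${ip}:2888:3888" &gt;&gt; /opt/zookeeper/conf/zoo.cfg
done
</code></pre> <p>A more robust version would loop until the endpoint list contains the expected number of peers instead of sleeping for a fixed time.</p>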
<p>I started a Kubernetes cluster on AWS using:</p> <pre><code>wget -q -O - https://get.k8s.io | bash </code></pre> <p>I then shut down the cluster. When I tried restarting it, I got the following error:</p> <pre><code>A client error (RouteAlreadyExists) occurred when calling the CreateRoute operation: The route identified by 10.246.0.0/24 already exists. </code></pre> <p>Any ideas?</p>
<p>Looks like the VPC was not correctly deleted. After deleting manually, the installation proceeded as normal.</p>
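<p>If you would rather clear just the stale route than recreate the whole VPC, the AWS CLI can remove it. This is only a sketch: the route table ID below is a placeholder that you would first look up with the describe call.</p> <pre><code># Find the route table that still holds the conflicting route
aws ec2 describe-route-tables \
  --filters "Name=route.destination-cidr-block,Values=10.246.0.0/24"

# Delete that route (rtb-xxxxxxxx is a placeholder for the ID found above)
aws ec2 delete-route \
  --route-table-id rtb-xxxxxxxx \
  --destination-cidr-block 10.246.0.0/24
</code></pre>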
<p>The next version of CloudFoundry / Diego will offer native support for Docker containers, which will be orchestrated across multiple hosts [<a href="http://thenewstack.io/docker-on-diego-cloud-foundrys-new-elastic-runtime/" rel="noreferrer">link</a>]. This sounds very similar to Kubernetes.</p> <p>Of course, the problem Kubernetes is trying to solve is more generic, whereas CloudFoundry is more focused on app development. However, to me it sounds like both are heading in a similar direction, and CloudFoundry is adding a lot more features on top of the plain orchestration.</p> <p>So I'm wondering about use-cases where Kubernetes would add more value than CloudFoundry?</p>
<p>As both a CloudFoundry (past) and Kubernetes (present) committer, I'm probably uniquely qualified to answer this one.</p> <h2>PaaS-like</h2> <p>I like to call CloudFoundry an "Application PaaS" and Kubernetes a "Container PaaS", but the distinction is fairly subtle and fluid, given that both projects change over time to compete in the same markets.</p> <p>The distinction between the two is that CF has a staging layer that takes a (12-factor) user app (e.g. jar or gem) and a Heroku-style buildpack (e.g. Java+Tomcat or Ruby) and produces a droplet (analogous to a Docker image). CF doesn't expose the containerization interface to the user, but Kubernetes does.</p> <h2>Audience</h2> <p>CloudFoundry's primary audience is enterprise application devs who want to deploy 12-factor stateless apps using Heroku-style buildpacks.</p> <p>Kubernetes' audience is a little broader, including both stateless application and stateful service developers who provide their own containers.</p> <p>This distinction could change in the future:</p> <ul> <li>CloudFoundry could start to accept docker images (<a href="http://lattice.cf/docs/docker-image-examples/">Lattice accepts Docker images</a>).</li> <li>Kubernetes could add an image generation layer (<a href="https://docs.openshift.org/latest/dev_guide/new_app.html#specifying-source-code">OpenShift does something like this</a>).</li> </ul> <h2>Feature Comparison</h2> <p>As both projects mature and compete, their similarities and differences will change. So take the following feature comparison with a grain of salt. </p> <p>Both CF and K8s share many similar features, like containerization, namespacing, and authentication. </p> <p>Kubernetes competitive advantages:</p> <ul> <li>Group and scale pods of containers that share a networking stack, rather than just scaling independently</li> <li>Bring your own container</li> <li>Stateful persistence layer</li> <li>Larger, more active OSS community</li> <li>More extensible architecture with replaceable components and 3rd party plugins</li> <li>Free web GUI</li> </ul> <p>CloudFoundry competitive advantages:</p> <ul> <li>Mature authentication, user grouping, and multi-tenancy support [x]</li> <li>Bring your own app</li> <li>Included load balancer</li> <li>Deployed, scaled, and kept alive by BOSH [x]</li> <li>Robust logging and metrics aggregation [x]</li> <li>Enterprise web GUI [x]</li> </ul> <p>[x] These features are not part of Diego or included in Lattice.</p> <h2>Deployment</h2> <p>One of CloudFoundry's competitive advantages is that it has a mature deployment engine, BOSH, which enables features like scaling, resurrection and monitoring of core CF components. BOSH also supports many IaaS layers with a pluggable cloud provider abstraction. Unfortunately, BOSH's learning curve and deployment configuration management are nightmarish. (As a BOSH committer, I think I can say this with accuracy.)</p> <p>Kubernetes' deployment abstraction is still in its infancy. Multiple target environments are available in the core repo, but they're not all working, well tested, or supported by the primary developers. This is mostly a maturity thing. One might expect this to improve over time and increase in abstraction. For example, <a href="https://docs.mesosphere.com/services/kubernetes/">Kubernetes on DCOS</a> allows deploying Kubernetes to an existing <a href="https://docs.mesosphere.com/">DCOS</a> cluster with a single command.</p> <h2>Historical Context</h2> <p>Diego is a rewrite of CF's Droplet Execution Agent.
It was originally developed before Kubernetes was announced and has taken on more feature scope as the competitive landscape has evolved. Its original goal was to generate droplets (user application + CF buildpack) and run them in Warden (renamed Garden when rewritten in Go) containers. Since its inception it's also been repackaged as <a href="http://lattice.cf/">Lattice</a>, which is somewhat of a CloudFoundry-lite (although that name was taken by an <a href="https://github.com/cloudfoundry/cf-lite">existing project</a>). For that reason, Lattice is somewhat toy-like, in that it has deliberately reduced user audience and scope, explicitly missing features that would make it "enterprise-ready". Features that CF already provides. This is partly because Lattice is used to test the core components, without some of the overhead from the more complex CF, but you can also use Lattice in internal high-trust environments where security and multi-tenancy aren't as much of a concern.</p> <p>It's also worth mentioning that CloudFoundry and Warden (its container engine) predate Docker as well, by a couple years.</p> <p>Kubernetes on the other hand, is a relatively new project that was developed by Google based on years of container usage with BORG and Omega. Kubernetes could be thought of as 3rd generation container orchestration at Google, the same way Diego is 3rd generation container orchestration at Pivotal/VMware (v1 written at VMware; v2 at VMware with Pivotal Labs help; v3 at Pivotal after it took over the project).</p>
<p>I am following the Fedora getting started guide (<a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/fedora/fedora_ansible_config.md">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/fedora/fedora_ansible_config.md</a>) and trying to run the pod <code>fedoraapache</code>. But <code>kubectl</code> always shows <code>fedoraapache</code> as pending:</p> <pre><code>POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS fedoraapache fedoraapache fedora/apache 192.168.226.144/192.168.226.144 name=fedoraapache Pending </code></pre> <p>Since it is pending, I cannot run <code>kubectl log pod fedoraapache</code>. So, I instead run <code>kubectl describe pod fedoraapache</code>, which shows the following errors:</p> <pre><code> Fri, 20 Mar 2015 22:00:05 +0800 Fri, 20 Mar 2015 22:00:05 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id d4877bdffd4f2a13a17d4cc93c27c1c93d5494807b39ee8a823f5d9350e404d4 Fri, 20 Mar 2015 22:00:05 +0800 Fri, 20 Mar 2015 22:00:05 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container d4877bdffd4f2a13a17d4cc93c27c1c93d5494807b39ee8a823f5d9350e404d4: (exit status 1) Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747 Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747 with error: API error (500): Cannot start container 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747: (exit status 1) Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747: (exit status 1) Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e: (exit status 1) Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e with error: API error (500): Cannot start container 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e: (exit status 1) Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614 with error: API error (500): Cannot start container 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614: (exit status 1) Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614 Fri, 
20 Mar 2015 21:42:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 109 {kubelet 192.168.226.144} implicitly required container POD pulled Successfully pulled image "kubernetes/pause:latest" Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614: (exit status 1) </code></pre>
<p>There are several reasons a container can fail to start:</p> <ul> <li><p>the container command itself fails and exits -> check your docker image and startup script to make sure it works. Use <code>sudo docker ps -a</code> to find the offending container and <code>sudo docker logs &lt;container&gt;</code> to check for failure inside the container</p></li> <li><p>a dependency is not there: that happens, for example, when one tries to mount a volume that is not present, such as <em>Secrets</em> that are not created yet -> make sure the dependent volumes are created.</p></li> </ul>
<p>Using Kubernetes for deployment:</p> <p>Considering I have a Dockerfile, I build it, then push it to the registry. If I run a container on a host, the image gets pulled and the container is run.</p> <p>Now, if I update the Dockerfile, build and push again, <strong>without changing its tag</strong>, then the image is changed in the registry, but the host already has the image pulled, and it doesn't seem to go look for updates.</p> <p>How do I force a pull to get the latest image when running a container? I can manually pull the image, but I'd like to know if there is a 'formal way' of doing this (in the pod or rc templates?)</p> <p>Thanks for insight.</p>
<p>Set an <code>imagePullPolicy</code> of <code>Always</code> on the container</p>
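<p>For example, in the pod or replication controller template it could look like the snippet below; the container and image names are placeholders:</p> <pre><code>spec:
  containers:
  - name: my-app
    image: myregistry/my-app:latest   # same tag, but re-pulled on every container start
    imagePullPolicy: Always
</code></pre> <p>With <code>Always</code>, the kubelet checks the registry each time it starts the container instead of reusing the locally cached image.</p>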
<p>I have several pods, for example a python web app and a redis (shared by other apps), so I need to place redis in a separate pod. But they all use the same subnet from docker (172.17.0.0/16) or even the same IP address. How can the app pods talk with the redis pod?</p> <p>Maybe what I want to ask is: what's the best way to set up multi-host container networking?</p> <hr> <p>7 weeks later, I am more familiar with kubernetes. I know kubernetes has a network model that assumes pods can access each other. So if your app service pod needs to access a shared service (redis) pod, you need to expose the shared service as a kubernetes service; then you can get the shared service endpoint from the app pods' environment variables or hostname.</p>
<p>When you create a service, the service will proxy the connection to the different pods.</p> <p>A service therefore maintains the list of IPs of the pods' containers.</p> <p>You can then look those up in the API;</p> <p>they will be at </p> <pre><code>http(s)://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${NAMESPACE}/endpoints/${SERVICE_NAME} </code></pre> <p>NAMESPACE is the name of the namespace (by default it is <code>default</code>, so if you didn't set a namespace on the pod, use 'default'), SERVICE_NAME is your service name, and KUBERNETES_SERVICE_HOST is an environment variable available in your container.</p> <p>You will get a JSON object with containers and "ip" tags. You can then pipe the answer to a </p> <pre><code>grep '\"ip\"' | awk '{print $2}' | awk -F\" '{print $2}' </code></pre> <p>to isolate the IPs.</p> <p>You might also need credentials to reach the API over https (test it with curl); in Google Cloud, credentials can be found by looking up </p> <pre><code>gcloud cluster-info &lt;your-cluster-name&gt; </code></pre> <p>Note: even if you don't use the service to talk to your pods, it serves the purpose of gathering the IPs for your pods. However, note that these may change if a pod gets rescheduled somewhere else when a node fails;<br> the Service takes care of maintaining the up-to-date list, but your app needs to poll at intervals or set a watch on the endpoints to keep its list up to date.</p>
<p>Is there any way to do a rolling-update with a replication controller that has 2 or more containers? </p> <p>For example, I have Jenkins set up to automatically do a rolling update on a rep controller in our dev environment once a successful build takes place, using the --image flag to specify the new container's image stored in GCR. This method doesn't work when there are two containers in the same pod, and there is no "-c" flag to specify the container you wish to update on the rolling-update command as there is on other commands such as "exec" or "logs". </p> <p>The reason I'm looking to have multiple containers per pod is to implement a logging sidecar as in: <a href="https://github.com/kubernetes/contrib/tree/master/logging/fluentd-sidecar-es" rel="nofollow">https://github.com/kubernetes/contrib/tree/master/logging/fluentd-sidecar-es</a> </p> <p>The only alternative I can think of is to bake the fluentd config into each container, which feels decidedly 'un-kubernetes' to me. </p>
<p>You are right in saying that <code>kubectl rolling-update frontend --image=image:v2</code> does not give you a way to provide more details about a container when updating a pod that has more than one container. It gives you an error <code>Image update is not supported for multi-container pods</code></p> <p>But, it certainly gives you 2 variants </p> <ol> <li><code>kubectl rolling-update frontend-v1 -f frontend-v2.json</code></li> <li><code>cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -</code> </li> </ol> <p>where v1 and v2 can be your build versions. This way, you would also have the advantage of your pod names adhering to your build versions too.</p>
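<p>As a rough sketch of what such a file could contain for the two-container pod from the question (shown as YAML for readability; the JSON form from the answer works the same way, and all names, labels and images below are placeholders rather than the asker's actual manifest): the new controller needs a different name and at least one changed selector label so it does not select the old pods, while the unchanged sidecar container is simply carried over.</p> <pre><code># frontend-v2.yaml (hypothetical) -- used as:
#   kubectl rolling-update frontend-v1 -f frontend-v2.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-v2              # must differ from the old RC's name
spec:
  replicas: 3
  selector:
    name: frontend
    version: v2                  # must differ from the old RC's selector
  template:
    metadata:
      labels:
        name: frontend
        version: v2
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/frontend:v2     # the container that actually changed
      - name: logging-sidecar
        image: my-fluentd-sidecar:stable         # placeholder for the unchanged sidecar
</code></pre>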
<p>Can someone give an example of how to use the <code>gitRepo</code> type of volume in <strong>Kubernetes</strong>?</p> <p>The doc says it's a plugin; not sure what that means. I could not find an example anywhere and I don't know the proper syntax.</p> <p>Especially, are there parameters to pull a specific branch, use credentials (username, password, or SSH key), etc...?</p> <p>EDIT: Going through the Kubernetes code, this is what I figured out so far:</p> <pre><code>- name: data
  gitRepo:
    repository: "git repo url"
    revision: "hash of the commit to use"
</code></pre> <p>But I can't seem to make it work, and I am not sure how to troubleshoot this issue.</p>
<p><strong>UPDATE</strong>:</p> <p>gitRepo is now deprecated:</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/60999" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/60999</a></p> <p><strong>ORIGINAL ANSWER</strong>:</p> <p>Going through the code, this is what I figured out:</p> <pre><code>- name: data
  gitRepo:
    repository: "git repo url"
    revision: "hash of the commit to use"
</code></pre> <p>After fixing typos in my mountPath, it works fine.</p>
<p>So I'm researching how to use Kubernetes for my case. I installed it and played around a bit.</p> <p>The question is: when the replication controller starts a couple of replicas, they have something like an id in their name:</p> <ol> <li>How unique is this id? Is it unique for the lifetime of kubernetes? Is it unique across different kubernetes runs (i.e. if I restart kubernetes)?</li> <li>How do I pass this id to the app in the container? Can I specify some kind of template in the yaml so that, for example, the id will be assigned to an environment variable or something similar?</li> <li>Alternatively, is there a way for the app in the container to ask for this id?</li> </ol> <p>More explanation of the use case: I have an application that writes some session files inside a directory. I want to guarantee uniqueness of the session ids in the system. This means if one app instance is running on VM1 and another instance on VM2, I want to prepend some kind of identifier to the ids, like app-1-dajk4l and app-2-dajk4l, where app is the name of the app and 1, 2 is the instance identifier, which should come from the replication controller because it is dynamic and cannot be configured manually. dajk4l is some identifier like the current timestamp or similar.</p> <p>Thanks.</p>
<ol> <li><p>The ID is guaranteed to be unique at any single point in time, since Kubernetes doesn't allow two pods in the same namespace to have the same name. There aren't any longer-term guarantees though, since they're just <a href="https://github.com/kubernetes/kubernetes/blob/7c9bbef96ed7f2a192a1318aa312919b861aee00/pkg/api/generate.go#L59" rel="noreferrer">generated as a random string of 5 alphanumeric characters</a>. However, given that there are more than 60 million such random strings, conflicts across time are also unlikely in most environments.</p></li> <li><p>Yes, you can pull in the pod's namespace and name as environment variables using what's called the <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/downward-api.md" rel="noreferrer">"Downward API"</a>, adding a field on the container like:</p> <pre><code>env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
</code></pre></li> </ol>
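<p>For the session-prefix use case in the question, the container can then build its prefix from that variable; a minimal sketch, assuming the app reads the prefix from its environment (the script and paths are only illustrative):</p> <pre><code># Inside the container: MY_POD_NAME is injected via the Downward API above.
SESSION_PREFIX="${MY_POD_NAME}-$(date +%s)"
touch "/var/sessions/${SESSION_PREFIX}.session"
</code></pre>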
<p>I know how to run a registry mirror</p> <pre><code>docker run -p 5000:5000 \
  -e STANDALONE=false \
  -e MIRROR_SOURCE=https://registry-1.docker.io \
  -e MIRROR_SOURCE_INDEX=https://index.docker.io \
  registry
</code></pre> <p>and how to use it </p> <pre><code>docker --registry-mirror=http://10.0.0.2:5000 -d </code></pre> <p>But how can I use multiple registry mirrors?</p> <p>This is what I need:</p> <ul> <li>Docker hub mirror</li> <li><a href="https://gcr.io" rel="noreferrer">Google container registry</a> mirror for k8s</li> <li>Private registry</li> </ul> <p>So I have to run two registry mirrors and a private registry: I want to <code>docker run registry</code> mirrors for the 1st and 2nd registries, and one more <code>docker run registry</code> to hold my private registry. The client will use all three of these registries.</p> <p>I have no clue how to do this. I think this is a common use case, please help, thanks.</p>
<p>You can use an imagePullSecret to give Kubernetes the credentials for the registry it should pull your containers from. Please see:</p> <p><a href="http://releases.k8s.io/release-1.0/docs/user-guide/images.md#specifying-imagepullsecrets-on-a-pod" rel="nofollow">http://releases.k8s.io/release-1.0/docs/user-guide/images.md#specifying-imagepullsecrets-on-a-pod</a></p>
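<p>A rough sketch of how that looks on a pod, assuming you have already created a dockercfg-type secret named <code>myregistrykey</code> as described in the linked doc (all names and the registry address below are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: my-registry.example.com:5000/my-app:1.0   # image in the private registry
  imagePullSecrets:
  - name: myregistrykey                              # secret holding the .dockercfg credentials
</code></pre> <p>Note that this covers credentials for a private registry; the mirror configuration itself (<code>--registry-mirror</code>) still lives in the Docker daemon options on each node.</p>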
<p>I have a pod that runs containers which require access to sensitive information like API keys and DB passwords. Right now, these sensitive values are embedded in the controller definitions like so:</p> <pre><code>env: - name: DB_PASSWORD value: password </code></pre> <p>which are then available inside the Docker container as the <code>$DB_PASSWORD</code> environment variable. All fairly easy.</p> <p>But reading their documentation on <a href="http://kubernetes.io/docs/user-guide/secrets/" rel="nofollow noreferrer">Secrets</a>, they explicitly say that putting sensitive configuration values into your definition breaches best practice and is potentially a security issue. The only other strategy I can think of is the following:</p> <ul> <li>create an OpenPGP key per user community or namespace</li> <li>use <a href="https://github.com/xordataexchange/crypt" rel="nofollow noreferrer">crypt</a> to set the configuration value into etcd (which is encrypted using the private key)</li> <li>create a kubernetes secret containing the private key, <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/secrets/secret.yaml" rel="nofollow noreferrer">like so</a></li> <li>associate that secret with the container (meaning that the private key will be accessible as a volume mount), <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/secrets/secret-pod.yaml#L10-L17" rel="nofollow noreferrer">like so</a></li> <li>when the container is launched, it will access the file inside the volume mount for the private key, and use it to decrypt the conf values returned from etcd</li> <li>this can then be incorporated into <a href="https://github.com/kelseyhightower/confd" rel="nofollow noreferrer">confd</a>, which populates local files according to a template definition (such as Apache or WordPress config files)</li> </ul> <p>This seems fairly complicated, but more secure and flexible, since the values will no longer be static and stored in plaintext.</p> <p>So my question, and I know it's not an entirely objective one, is whether this is completely necessary or not? Only admins will be able to view and execute the RC definitions in the first place; so if somebody's breached the kubernetes master, you have other problems to worry about. The only benefit I see is that there's no danger of secrets being committed to the filesystem in plaintext...</p> <p>Are there any other ways to populate Docker containers with secret information in a secure way?</p>
<p>Unless you have many megabytes of config, this system sounds unnecessarily complex. The intended usage is for you to just put each config into a secret, and the pods needing the config can mount that secret as a volume.</p> <p>You can then use any of a variety of mechanisms to pass that config to your task, e.g. if it's environment variables <code>source secret/config.sh; ./mybinary</code> is a simple way.</p> <p>I don't think you gain any extra security by storing a private key as a secret.</p>
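<p>A minimal sketch of that pattern, with all names, paths and the image made up for illustration (the data value is just the base64 of <code>export DB_PASSWORD=password</code>):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: app-config
data:
  config.sh: ZXhwb3J0IERCX1BBU1NXT1JEPXBhc3N3b3Jk
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:1.0
    # Source the mounted config, then start the real binary.
    command: ["/bin/sh", "-c", ". /etc/app-secrets/config.sh; exec ./mybinary"]
    volumeMounts:
    - name: config
      mountPath: /etc/app-secrets
      readOnly: true
  volumes:
  - name: config
    secret:
      secretName: app-config
</code></pre>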
<p>Has anyone mounted a private GitHub repository into a kubernetes pod volume?</p> <p>What is the best way to achieve this? I thought of two possible ways:</p> <ol> <li>Using a user / password in the HTTPS repository URL</li> <li>Using a private SSH key on the machine</li> </ol> <p>I like the second better, but I couldn't figure out which user pulls the repository, so I could put the appropriate SSH configuration in place for it.</p> <p>Any thoughts?</p>
<p>GitHub allows cloning repositories using an OAuth token in https URLs, as such:</p> <p><code>$ git clone https://&lt;oauth-token&gt;@github.com/owner/repo.git </code></p> <p>See <a href="https://help.github.com/articles/creating-an-access-token-for-command-line-use/" rel="nofollow">https://help.github.com/articles/creating-an-access-token-for-command-line-use/</a></p>
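<p>Tying this back to the question, that token-based URL can then be used directly in a <code>gitRepo</code> volume; a rough sketch (the volume name, mount path, image and repository are placeholders):</p> <pre><code>spec:
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: source
      mountPath: /src            # the repo will be cloned here
  volumes:
  - name: source
    gitRepo:
      repository: "https://&lt;oauth-token&gt;@github.com/owner/repo.git"
</code></pre> <p>Keep in mind that the token then sits in the pod spec in plain text, so an SSH-key or Secret-based approach may still be preferable for sensitive repositories.</p>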
<p>I am trying to <a href="https://cwiki.apache.org/confluence/display/STRATOS/4.1.x+Install+Stratos+with+Kubernetes+in+a+Testing+Environment" rel="nofollow">Install Stratos with Kubernetes in a Testing Environment</a> to build Stratos. I downloaded the Kubernetes binaries and provisioned a Docker registry in the VAGRANT_KUBERNETES_SETUP folder (step 2.c.i on the page). But it reports 3 failed units (docker.service, setup-network-environment.service and docker.socket) when I log into the master node.<br>So I can't view Docker images using the 'docker images' command; when I try, it gives this error:<br>"FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?" <br> How can I fix this problem? Do I need to install it a different way to work with vagrant?</p>
<p>Did you do a <code>sudo -s</code> on the node? You have to be an admin (root) to connect to the docker daemon and run queries using the docker command line client.</p>
<p>I have encountered a scalability problem when trying out the kubernetes cluster. To simplify the topology in my test machine, the NodePort type is used to expose the individual service externally. The bare-metal host for the node and master is a RHEL 7 machine with 24 CPUs and 32G RAM; I don't yet have a dedicated load balancer, or cloud-provider-like infrastructure. A snippet of the service definition looks like the one below</p> <pre><code>"spec": {
    "ports": [{
        "port": 10443,
        "targetPort": 10443,
        "protocol": "TCP",
        "nodePort": 30443
    } ],
    "type": "NodePort",
</code></pre> <p>This way the application is accessible via <code>https://[node_machine]:30443/[a_service]</code> </p> <p>Each such service is backed by only one Pod. Ideally I would want to have several services deployed on the same node (but using different NodePorts), all running concurrently.</p> <p>Things were working well until it became evident that for a similar workload, increasing the number of services deployed (and therefore backend pods as well) makes the applications degrade in performance. Surprisingly, when breaking down the service loading time, I noticed there's a dramatic degradation in 'Connection Time', which seems to indicate there is a slowdown somewhere in the 'network' layer. Please note that the load isn't high enough to drive much of the CPU on the node yet. I read about the <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#shortcomings" rel="nofollow">shortcomings</a> in the doc, but I am not sure if what I hit is exactly the limitation of the kube-proxy/Service described there. </p> <p>The questions are:</p> <ol> <li><p>Is there any suggestion on how to make it more scalable? I.e. to be able to support more services/Pods without sacrificing the applications' performance? The NodePort type is the easiest way to set up the 'public' address for our services, but is there any limitation for scalability or performance if all services and Pods are set up this way?</p></li> <li><p>Would there be any difference if we change the type to LoadBalancer? "type": "LoadBalancer"</p></li> <li><p>Furthermore, is there a benefit to having a dedicated LoadBalancer or reverse proxy to improve the scalability, e.g. HAProxy or the like, that routes traffic from outside to the backend Pods (or Services)? I noticed there's some work done for Nginx darkgaro/kubernetes-reverseproxy - unfortunately the doc seems incomplete and there's no concrete example. In some of the other threads folks talked about Vulcan - is it the recommended LB tool for kubernetes? </p></li> </ol> <p>Your recommendation and help are highly appreciated!</p>
<p>Hello, I am kinda new to <a href="https://github.com/kubernetes/kubernetes" rel="nofollow">kubernetes</a> but I have similar questions and concerns. I will try to answer some of them or redirect you to the relevant sections of the user guide.</p> <p>In case you are deploying Kubernetes on a non cloud-enabled provider, like for example vagrant / local, etc., then some features are not currently offered or automated by the platform for you. </p> <p>One of those things is the 'LoadBalancer' type of Service. The automatic provisioning and assignment of a PUBLIC IP to the service (acting as a load balancer) currently happens only on platforms like Google Container Engine. </p> <p>See the issue <a href="https://github.com/kubernetes/kubernetes/issues/7514" rel="nofollow">here</a> and <a href="http://kubernetes.io/v1.0/docs/user-guide/services.html#type-loadbalancer" rel="nofollow">here</a>.</p> <p>The official documentation states</p> <blockquote> <p>On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.</p> </blockquote> <p>Currently an alternative is being developed and documented; see <a href="https://github.com/kubernetes/contrib/tree/master/service-loadbalancer" rel="nofollow">here</a>, using <a href="http://www.haproxy.org/" rel="nofollow">HAProxy</a>.</p> <p>Maybe in the near future kubernetes will eventually support that kind of feature on all the available platforms it can be deployed on and operate, so always check their updated features.</p> <p>What you are referring to as performance degradation is most probably due to the fact that the PublicIP (<a href="http://kubernetes.io/v1.0/docs/user-guide/services.html#type-nodeport" rel="nofollow">NodePort</a> from version 1.0 onwards) feature is working. Meaning that with the use of the NodePort service type, kubernetes assigns a port on ALL nodes of the cluster for this kind of service. Then the kube-proxy intercepts the calls to these ports and forwards them to the actual service, etc.</p> <p>An example of using HAProxy to try to solve the very same problem can be found <a href="http://www.dasblinkenlichten.com/kubernetes-101-external-access-into-the-cluster/" rel="nofollow">here</a>.</p> <p>Hope that helped a bit.</p>
<p>I have a containerized app running on a VM. It consists of two docker containers. The first contains the WebSphere Liberty server and the web app. The second contains PostgreSQL and the app's DB.</p> <p>On my local VM, I just use <strong><em>docker run</em></strong> to start the two containers and then I use <strong><em>docker attach</em></strong> to attach to the web server container so I can edit the server.xml file to specify the public host IP for the DB and then start the web server in the container. The app runs fine.</p> <p>Now I'm trying to deploy the app on Google Cloud Platform.</p> <ol> <li>I set up my gcloud configuration (project, compute/zone).</li> <li>I created a cluster.</li> <li>I created a JSON pod config file which specifies both containers.</li> <li>I created the pod.</li> <li>I opened the firewall for the port specified in the pod config file.</li> </ol> <p>At this point:</p> <ol> <li>I look at the pod (<strong><em>gcloud preview container kubectl get pods</em></strong>), it shows both containers are running.</li> <li>I SSH to the cluster (<strong><em>gcloud compute ssh xxx-mycluster-node-1</em></strong>) and issue <strong><em>sudo docker ps</em></strong> and it shows the database container running, but not the web server container. With <strong><em>sudo docker ps -l</em></strong> I can see the web server container that is not running, but it keeps trying to start and exiting every 10 seconds or so.</li> </ol> <p>So now I need to update the server.xml and start the Liberty server, but I have no idea how to do that in this realm. Can I attach to the web server container like I do in my local VM? Any help would be greatly appreciated. Thanks.</p>
<p>Yes, you can attach to a container in a pod. Using Kubernetes 1.0, issue the following commands.</p> <p>Do:</p> <ul> <li><code>kubectl get po</code> to get the POD name</li> <li><code>kubectl describe po POD-NAME</code> to find the container name</li> </ul> <p>Then:</p> <p><code>kubectl exec -it POD-NAME -c CONTAINER-NAME bash</code> (assuming you have bash)</p> <p>It's similar to <code>docker exec -it CONTAINER-NAME WHAT_EVER_LOCAL_COMMAND</code></p>
<p>Is there a way to pass environment variables through the services in Kubernetes?</p> <p>I tried passing them in to my service yaml like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    name: kafka
  name: kafka
spec:
  ports:
    - port: 9092
  selector:
    name: kafka
  env:
    - name: BROKER_ID
      value: "1"
</code></pre> <p>The service is being consumed by kubectl, and is created.</p> <p>I've confirmed the service is connected to my container through <code>env | grep KAFKA</code>, and the number of variables in the output greatly increases, as expected when my service is up.</p> <p>However, I would like to pass in custom environment variables that have to be different depending on which instance of the container it is.</p> <p>Is this possible?</p>
<p>The way that Kubernetes is designed has Services decoupled from Pods. You can not inject a Secret or an env var into a running Pod. What you want is to configure the Pod to use the env var or Secret.</p>
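<p>In other words, the <code>env</code> block belongs on the container in the pod / replication controller template, not on the Service. A minimal sketch for the kafka example from the question (the image name is a placeholder):</p> <pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: kafka
spec:
  replicas: 1
  selector:
    name: kafka
  template:
    metadata:
      labels:
        name: kafka
    spec:
      containers:
      - name: kafka
        image: my-kafka-image          # placeholder
        ports:
        - containerPort: 9092
        env:
        - name: BROKER_ID
          value: "1"
</code></pre> <p>Since all replicas of one controller share the same template, per-instance values like BROKER_ID need either one controller per instance or something derived at runtime (e.g. from the pod's own name or IP, as discussed in the other kafka answer above).</p>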
<p>What is the recommended way to upgrade a kubernetes cluster as new versions are released?</p> <p>I heard <a href="https://stackoverflow.com/a/32358515/3045">here</a> it may be <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-push.sh" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-push.sh</a>. If that is the case how does kube-push.sh relate to <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/gce/upgrade.sh" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/gce/upgrade.sh</a>?</p> <p>I've also heard <a href="https://stackoverflow.com/a/29503904/3045">here</a> that we should instead create a new cluster, copy/move the pods, replication controllers, and services from the first cluster to the new one and then turn off the first cluster.</p> <p>I'm running my cluster on aws if that is relevant.</p>
<p>The second script you reference (gce/upgrade.sh) only works if your cluster is running on GCE. There isn't (yet) an equivalent script for AWS, but you could look at the script and follow the steps (or write them into a script) to get the same behavior. </p> <p>The main difference between upgrade.sh and kube-push.sh is that the former does a replacement upgrade (remove a node, create a new node to replace it) whereas the latter does an "in place" upgrade. </p> <p>Removing and replacing the master node only works if the persistent data (etcd database, server certificates, authorized bearer tokens, etc) resides on a persistent disk separate from the boot disk of the master (this is how it is configured by default in GCE). Removing and replacing nodes should be fine on AWS (but keep in mind that any pods not under a replication controller won't be restarted). </p> <p>Doing an in-place upgrade doesn't require any special configuration, but that code path isn't as thoroughly tested as the remove and replace option. </p> <p>You shouldn't need to entirely replace your cluster when upgrading to a new version, unless you are using pre-release versions (e.g. alpha or beta releases) which can sometimes have breaking changes between them. </p>
<p>So I have deployed a Kubernetes cluster and installed a private Docker registry. Here is my registry controller:</p> <pre><code>--- apiVersion: v1 kind: ReplicationController metadata: name: registry-master labels: name: registry-master spec: replicas: 1 selector: name: registry-master template: metadata: labels: name: registry-master spec: containers: - name: registry-master image: registry ports: - containerPort: 5000 command: ["docker-registry"] </code></pre> <p>And the service:</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: registry-master labels: name: registry-master spec: ports: # the port that this service should serve on - port: 5000 targetPort: 5000 selector: name: registry-master </code></pre> <p>Now I sshed to one of Kubernetes' nodes and built a Ruby app container:</p> <pre><code>cd /tmp git clone https://github.com/RichardKnop/sinatra-redis-blog.git cd sinatra-redis-blog docker build -t ruby-redis-app </code></pre> <p>When I try to tag it and push it to the registry:</p> <pre><code>docker tag ruby-redis-app registry-master/ruby-redis-app docker push 10.100.129.115:5000/registry-master/ruby-redis-app </code></pre> <p>I am getting this error:</p> <pre><code>Error response from daemon: invalid registry endpoint https://10.100.129.115:5000/v0/: unable to ping registry endpoint https://10.100.129.115:5000/v0/ v2 ping attempt failed with error: Get https://10.100.129.115:5000/v2/: read tcp 10.100.129.115:5000: connection reset by peer v1 ping attempt failed with error: Get https://10.100.129.115:5000/v1/_ping: read tcp 10.100.129.115:5000: connection reset by peer. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry 10.100.129.115:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/10.100.129.115:5000/ca.crt </code></pre> <p>Any idea how to solve it? I have been struggling with this for several hours.</p> <p>Richard</p>
<p>If you're using HTTPS, you must have either created a self-signed certificate (with your own CA authority) or obtained a CA-signed certificate.</p> <p>If so, you need to install this CA cert on the machine you're calling FROM.</p> <p>Put your CA cert in </p> <pre><code>/etc/ssl/certs </code></pre> <p>and run </p> <pre><code>update-ca-certificates </code></pre> <p>Sometimes I have also had to put it in </p> <pre><code>/usr/local/share/ca-certificates/ </code></pre> <p>(in both cases your CA file extension should be <code>.pem</code>).</p> <p>For Docker you may also need to put a file in </p> <pre><code>/etc/docker/certs.d/&lt;--your-site-url---&gt;/ca.crt </code></pre> <p>and the file must be named <code>ca.crt</code> (the same file as the .pem file, but named ca.crt).</p>
<p>I'm currently doing some tests on a kubernetes cluster. I was wondering why the pods aren't rescheduled in some cases:</p> <ul> <li>When the node is unreachable</li> <li>When the remote kubelet doesn't answer</li> </ul> <p>Actually, the only case where a pod gets rescheduled is when the kubelet notifies the master. Is this on purpose? Why? If I shut down a server where there's an rc with a single pod running, my service is down.</p> <p>Maybe there's something I misunderstood.</p> <p>Regards, Smana</p>
<p>There is a quite long default timeout for detecting unreachable nodes and for re-scheduling pods, maybe you did not wait long enough? </p> <p>You can adjust the timeouts with several flags:</p> <ul> <li>node-status-update-frequency on the kubelet (<a href="http://kubernetes.io/v1.0/docs/admin/kubelet.html" rel="nofollow">http://kubernetes.io/v1.0/docs/admin/kubelet.html</a>)</li> <li>node-monitor-grace-period and pod_eviction_timeout on the kube-controller-manager (<a href="http://kubernetes.io/v1.0/docs/admin/kube-controller-manager.html" rel="nofollow">http://kubernetes.io/v1.0/docs/admin/kube-controller-manager.html</a>)</li> </ul>
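<p>As a sketch of where those settings go (the values below are only illustrative, and per the v1.0 docs the controller-manager flag is spelled <code>--pod-eviction-timeout</code>):</p> <pre><code># On each node: report node status more frequently
kubelet --node-status-update-frequency=4s ...

# On the master: declare nodes failed and evict their pods sooner
kube-controller-manager \
  --node-monitor-grace-period=20s \
  --pod-eviction-timeout=30s ...
</code></pre> <p>Shorter values make failover faster, at the cost of more control-plane chatter and a higher chance of false positives during brief network hiccups.</p>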
<p>Can someone give an example of how to use the <code>gitRepo</code> type of volume in <strong>Kubernetes</strong>?</p> <p>The doc says it's a plugin; not sure what that means. I could not find an example anywhere and I don't know the proper syntax.</p> <p>Especially, are there parameters to pull a specific branch, use credentials (username, password, or SSH key), etc...?</p> <p>EDIT: Going through the Kubernetes code, this is what I figured out so far:</p> <pre><code>- name: data
  gitRepo:
    repository: "git repo url"
    revision: "hash of the commit to use"
</code></pre> <p>But I can't seem to make it work, and I am not sure how to troubleshoot this issue.</p>
<p>This is a sample application I used:</p> <pre><code>{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {
    "name": "tess.io",
    "labels": {
      "name": "tess.io"
    }
  },
  "spec": {
    "replicas": 3,
    "selector": {
      "name": "tess.io"
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "tess.io"
        }
      },
      "spec": {
        "containers": [
          {
            "image": "tess/tessio:0.0.3",
            "name": "tessio",
            "ports": [
              {
                "containerPort": 80,
                "protocol": "TCP"
              }
            ],
            "volumeMounts": [
              {
                "mountPath": "/tess",
                "name": "tess"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "tess",
            "gitRepo": {
              "repository": "https://&lt;TOKEN&gt;:x-oauth-basic@github.com/tess/tess.io"
            }
          }
        ]
      }
    }
  }
}
</code></pre> <p>And you can use the revision too. </p> <p>PS: The repo above does not exist anymore. </p>
<p>I am using GKE with kubectl installed from gcloud components. I have created a pv (gcePersistentDisk) with namespace scope using kubectl.</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: cstor-cs-a-disk-david namespace: ns-david spec: gcePersistentDisk: pdName: cstor-cs-a-disk-david fsType: ext4 partition: 0 readOnly: false accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain capacity: storage: 200Gi </code></pre> <p>This says that specifying namespace with create pv is/was valid:<br> <a href="http://kubernetes.io/third_party/swagger-ui/#!/api%2Fv1/createNamespacedPersistentVolume" rel="noreferrer">http://kubernetes.io/third_party/swagger-ui/#!/api%2Fv1/createNamespacedPersistentVolume</a></p> <p>When I run 'kubectl get pv' I see the pv.</p> <pre><code>$ kubectl get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON cstor-cs-a-disk-david &lt;none&gt; 214748364800 RWO Available </code></pre> <p>I did not expect this because the pv wasn't created with the default namespace scope.</p> <p>The same happens if I specify a namespace argument (valid or not).</p> <pre><code>$ kubectl get namespaces NAME LABELS STATUS default &lt;none&gt; Active kube-system &lt;none&gt; Active ns-david &lt;none&gt; Active $ kubectl get pv --namespace=demo NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON cstor-cs-a-disk-david &lt;none&gt; 214748364800 RWO Available </code></pre> <p>If I create a claim against this pv and query it with 'kubectl get pvc' then the claim is not found but is found when I specify the correct namespace.</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cstor-cs-a-disk-claim-david namespace: ns-david spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi volumeName: cstor-cs-a-disk-david $ kubectl get pvc NAME LABELS STATUS VOLUME $ kubectl get pvc --namespace=ns-david NAME LABELS STATUS VOLUME cstor-cs-a-disk-claim-david map[] Bound cstor-cs-a-disk-david </code></pre> <p>Do pv have namespace scope or are they global?</p>
<p>PVs, like nodes, are not scoped to any namespace. However, as you <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#not-all-objects-are-in-a-namespace" rel="noreferrer">noted</a>, PVCs are.</p>
<p>I have a working insecure K8S cluster setup: CoreOS alpha image + Vagrant (a custom solution following the K8S getting started guide for a scratch setup). Now I want to set up authentication for K8s Cluster Admins who can access the API via the <code>kubectl cluster-info</code> command etc. I want to set up something similar to the <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/access.md#design" rel="nofollow">design doc</a> - Simple profile.</p> <p>Then I followed the <a href="https://github.com/kubernetes/kubernetes/blob/968cbbee5d4964bd916ba379904c469abb53d623/docs/admin/authentication.md" rel="nofollow">authentication</a> docs; I picked <strong>Client certificate authentication</strong> from the authentication plugins. </p> <p>I prepared <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/scratch.html#preparing-certs" rel="nofollow">certs</a>, saved <code>/srv/kubernetes/ca.crt</code>, <code>/srv/kubernetes/server.crt</code>, <code>/srv/kubernetes/server.key</code> on the <strong>Master Node</strong>.</p> <p>I also set up the <code>kubeconfig</code> file by following the guide.</p> <pre><code>kubectl config set-cluster $CLUSTER_NAME --certificate-authority=$CA_CERT --embed-certs=true --server=https://$MASTER_IP
kubectl config set-credentials $CLUSTER_NAME --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN
kubectl config set-context $CLUSTER_NAME --cluster=$CLUSTER_NAME --user=admin
kubectl config use-context $CONTEXT --cluster=$CONTEXT
</code></pre> <p>When the api-server starts, it also uses the same values; see <code>$CA_CERT</code>, <code>$CLI_CERT</code>, <code>$CLI_KEY</code>. Q1: are those values in the right place?</p> <pre><code>/kube-apiserver \
  --allow_privileged=true \
  --bind_address=0.0.0.0 \
  --secure_port=6443 \
  --kubelet_https=true \
  --service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE} \
  --etcd_servers=$ETCD_SERVER \
  --service-node-port-range=${SERVICE_NODE_PORT_RANGE} \
  --cluster-name=$CLUSTER_NAME \
  --client-ca-file=$CA_CERT \
  --tls-cert-file=$CLI_CERT \
  --tls-private-key-file=$CLI_KEY \
  --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
  --logtostderr=true
</code></pre> <p>Logs are below</p> <pre><code>Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373083 1 reflector.go:136] Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: connection refused
Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373523 1 reflector.go:136] Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token: dial tcp 127.0.0.1:8080: connection refused
Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373631 1 reflector.go:136] Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts: dial tcp 127.0.0.1:8080: connection refused
Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373695 1 reflector.go:136] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: connection refused
Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373748 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused
Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373788 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused
Aug 30 06:31:30 kube-master docker[3706]: [restful] 2015/08/30 06:31:30 log.go:30: [restful/swagger] listing is available at https://10.0.2.15:6443/swaggerapi/
Aug 30 06:31:30 kube-master docker[3706]: [restful] 2015/08/30 06:31:30 log.go:30: [restful/swagger] https://10.0.2.15:6443/swaggerui/ is mapped to folder /swagger-ui/
Aug 30 06:31:30 kube-master docker[3706]: I0830 06:31:30.398612 1 server.go:441] Serving securely on 0.0.0.0:6443
Aug 30 06:31:30 kube-master docker[3706]: I0830 06:31:30.399042 1 server.go:483] Serving insecurely on 127.0.0.1:8080
</code></pre> <p>On my <strong>MacOS</strong> machine, I want to connect <code>kubectl</code> to my <strong>$CLUSTER_NAME</strong> cluster.</p> <pre><code>export KUBERNETES_MASTER=http://172.17.8.100:6443
kubectl cluster-info
</code></pre> <p>Terminal output:</p> <pre><code>➜ kubectl cluster-info
error: couldn't read version from server: Get http://172.17.8.100:6443/api: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
</code></pre> <p>Here is my <code>kubeconfig</code> file on the <em>MacOS machine</em>, <code>~/.kube/config</code>:</p> <pre><code>➜ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: http://172.17.8.100:6443
  name: kube-01
contexts:
- context:
    cluster: kube-01
    user: admin
  name: kube
current-context: kube
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: cxKranwtWI2nyASebbF1HV3p1EWJbNcE
</code></pre> <p><strong>Q: How can my <code>kubectl</code> on MacOS access my K8S cluster securely? Since I never added a user <code>admin</code> on my api-server, I assume that all authentication is being done by the <code>ca-file</code>?</strong></p> <p><strong>Q: Once I fix the secure login issue, how can I fix the <code>admission-control</code> plugins API errors, like the <code>ServiceAccount</code> connection refused above?</strong></p> <p><strong>Q: Do I use <code>http</code> or <code>https</code>? I prefer to use <code>http://IP:6443</code>; not sure whether that is the problem?</strong></p> <p><strong>Q: Do I need to apply <code>--token-auth-file=</code> or <code>--basic-auth-file</code>? By reading the Docs, I think I could pick one of these methods for authentication. I would prefer to do it with the <code>ca</code>, which is more secure, right?</strong></p> <hr> <p>I used the function <code>create-certs</code> in <code>cluster/gce/util.sh</code> to generate my <code>certs</code> files. I am not too familiar with <code>certs</code> and <code>keys</code>, so I post them here. Well, they are really dummy <code>certs</code> and <code>keys</code> for testing; they are not being used anywhere. They are simply posted here to verify whether I did something wrong.
</p> <p><strong>ca.crt</strong></p> <p>-----BEGIN CERTIFICATE----- MIIDWTCCAkGgAwIBAgIJAMbTBaUcQSbGMA0GCSqGSIb3DQEBCwUAMCIxIDAeBgNV BAMMFzE3Mi4xNy44LjEwMEAxNDQwNzgwMjgxMB4XDTE1MDgyODE2NDQ0MVoXDTI1 MDgyNTE2NDQ0MVowIjEgMB4GA1UEAwwXMTcyLjE3LjguMTAwQDE0NDA3ODAyODEw ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDNmT0O8sBXTd2Htbb+hnsq P/YvUNYTXzLy6+T/d9/KRrxq1JWO70E7L2hFOvOdGF0gZuoAefki5ymkFYfwoZsK NEXvA1AxBMtQnMCdUOp7m5XW+c9uFepW+jzvb4PRBoUHZjW5HhxT6UZ21FiEvwHP NBnCL9gp1NIcNOaUIZvFI7hpko0tfAPFYY0NkHRo6mLpvzaGTippzySMSLyQ7cs4 IcUrFGJbsTNISCSsCG//+A6I62sQAURr0hjeW9FmGHxwYW+0wdyyTtlFPTKrVrC4 ETc5WeQoJeZhjoH7Dkj8l6QBvv2cDtZwnY2oCUGXf63c3hoRaEkeFis1RWQcQKoT AgMBAAGjgZEwgY4wHQYDVR0OBBYEFONIYbWt3l9D5j9VvJADUQfmIBpQMFIGA1Ud IwRLMEmAFONIYbWt3l9D5j9VvJADUQfmIBpQoSakJDAiMSAwHgYDVQQDDBcxNzIu MTcuOC4xMDBAMTQ0MDc4MDI4MYIJAMbTBaUcQSbGMAwGA1UdEwQFMAMBAf8wCwYD VR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQCJtrf1Mf+pHwCsMG8HPcuR4oij ugYkzawEF2FSCe2VbFMDxwmHbHw2N9ZOwRLyeSuR0JAY5aN31pqIzYCmmKf2otKU +mtTaK5YIsZU2IdxoR6VHaHT83zSGq9RhteqDdM8tuMvNsV5I9pJCu+Bkv3MsJpN 0PIc+GFs52A+bQC3cjWqLkgJeYEqolNnJpeex9G3ovqbTzavgM8q5gjdTyz8tDIo Dc4RKcuwyrAnkiJ93HdWLwkKcEXzrX/lU9NYsvmycBVbkRaIh7md82HCUiwkmmJC Xz3+xVrghzMo0DgoInzxcPFRWPc00CZcb5P5VRepa2rPwEyNgEp3BsQLXFIt -----END CERTIFICATE-----</p> <p><strong>server.crt</strong></p> <pre><code>Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha256WithRSAEncryption Issuer: CN=172.17.8.100@1440780281 Validity Not Before: Aug 28 16:44:41 2015 GMT Not After : Aug 25 16:44:41 2025 GMT Subject: CN=kube-master Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (2048 bit) Modulus (2048 bit): 00:ab:3f:cf:95:50:3d:7f:b4:82:ba:72:7a:88:2e: 41:79:67:7d:9a:4a:22:27:5f:fd:5c:78:6f:3d:ad: 57:4c:fd:37:9e:b5:35:f1:88:59:c1:e9:10:38:3e: de:7f:57:cf:e9:fc:fd:d7:b5:a8:7a:0e:5f:e4:16: 6f:2a:66:98:28:6c:42:a8:5f:95:3d:0b:02:f2:ec: ab:aa:19:40:60:b3:e5:7a:64:7d:5b:f2:9c:84:d5: bb:06:79:e7:00:2f:2c:a0:0a:88:f4:b0:c5:31:de: 7d:30:d6:b3:4d:ea:64:85:bb:f9:89:5a:f5:22:41: 92:35:d4:a4:7d:80:64:65:d9:1d:c9:30:39:af:34: 57:cd:d5:56:5d:9f:35:5d:ee:a3:07:ed:f1:c5:68: db:db:12:65:31:e6:6c:1e:77:44:3e:7c:03:bc:89: f0:4c:14:a6:41:39:22:a3:a3:a0:8d:20:eb:69:7a: c5:de:b0:2f:94:67:68:ab:8c:8a:24:59:38:a4:57: 19:2d:c2:0e:37:c8:73:98:ae:d8:0a:a4:e2:72:22: 49:9a:55:58:ad:8e:c3:eb:42:b5:41:02:c9:40:27: d1:77:41:ab:4f:0b:2a:6b:b2:b6:38:7f:a0:ce:cf: 9f:cd:7c:54:72:c6:43:cd:1d:5b:60:b9:45:eb:10: ab:ad Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: CA:FALSE X509v3 Subject Key Identifier: B2:46:5F:5A:68:3E:08:78:25:8C:AE:5E:EB:F1:3B:7B:CF:9D:A6:F3 X509v3 Authority Key Identifier: keyid:E3:48:61:B5:AD:DE:5F:43:E6:3F:55:BC:90:03:51:07:E6:20:1A:50 DirName:/CN=172.17.8.100@1440780281 serial:C6:D3:05:A5:1C:41:26:C6 X509v3 Extended Key Usage: TLS Web Server Authentication X509v3 Key Usage: Digital Signature, Key Encipherment X509v3 Subject Alternative Name: IP Address:172.17.8.100, IP Address:10.100.0.1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:kube-master Signature Algorithm: sha256WithRSAEncryption 58:b1:63:41:3e:94:ed:3d:bd:3c:e8:0c:78:30:54:c1:6d:33: 00:42:74:c8:7a:64:cc:fd:9a:70:ab:38:5b:1c:92:7c:9b:56: 1a:d7:fd:38:51:07:cf:5a:b5:0a:11:85:01:3d:52:86:96:ad: 16:be:ea:9c:2c:ee:3c:14:c9:5b:58:d7:ab:45:ae:d8:e0:2d: 70:7c:55:40:44:b8:98:ad:1b:d4:66:35:c5:78:13:4c:e7:5a: de:82:15:43:cb:bb:83:3a:09:04:fa:5e:6f:d9:ca:17:b8:40: 00:b0:ba:06:ed:73:ed:c8:c7:5a:53:aa:d3:43:a2:f1:c2:cf: 
14:9b:c2:7b:b7:c0:2a:56:a0:53:2e:af:2d:07:65:c0:70:c1: 92:86:34:05:39:3c:ed:3f:6e:f9:31:7f:de:5a:ed:9b:c8:83: e0:f4:9c:de:c7:9c:04:be:d2:6e:8d:5e:3e:ad:46:d4:82:70: 9d:79:b9:c3:dd:b4:c0:6e:1b:23:d0:45:be:26:c6:7e:4c:ec: c5:c3:c9:ee:1e:93:d4:a5:11:e9:6a:1d:e1:ee:af:eb:83:e6: dd:ec:13:7b:45:60:18:f5:05:3f:61:7b:3c:2b:b1:28:c4:92: 5e:bc:67:c0:02:22:a9:aa:69:d5:e9:0e:75:80:36:b2:66:84: fe:05:c2:75 -----BEGIN CERTIFICATE----- MIID3DCCAsSgAwIBAgIBATANBgkqhkiG9w0BAQsFADAiMSAwHgYDVQQDDBcxNzIu MTcuOC4xMDBAMTQ0MDc4MDI4MTAeFw0xNTA4MjgxNjQ0NDFaFw0yNTA4MjUxNjQ0 NDFaMBYxFDASBgNVBAMMC2t1YmUtbWFzdGVyMIIBIjANBgkqhkiG9w0BAQEFAAOC AQ8AMIIBCgKCAQEAqz/PlVA9f7SCunJ6iC5BeWd9mkoiJ1/9XHhvPa1XTP03nrU1 8YhZwekQOD7ef1fP6fz917Woeg5f5BZvKmaYKGxCqF+VPQsC8uyrqhlAYLPlemR9 W/KchNW7BnnnAC8soAqI9LDFMd59MNazTepkhbv5iVr1IkGSNdSkfYBkZdkdyTA5 rzRXzdVWXZ81Xe6jB+3xxWjb2xJlMeZsHndEPnwDvInwTBSmQTkio6OgjSDraXrF 3rAvlGdoq4yKJFk4pFcZLcION8hzmK7YCqTiciJJmlVYrY7D60K1QQLJQCfRd0Gr Twsqa7K2OH+gzs+fzXxUcsZDzR1bYLlF6xCrrQIDAQABo4IBJzCCASMwCQYDVR0T BAIwADAdBgNVHQ4EFgQUskZfWmg+CHgljK5e6/E7e8+dpvMwUgYDVR0jBEswSYAU 40hhta3eX0PmP1W8kANRB+YgGlChJqQkMCIxIDAeBgNVBAMMFzE3Mi4xNy44LjEw MEAxNDQwNzgwMjgxggkAxtMFpRxBJsYwEwYDVR0lBAwwCgYIKwYBBQUHAwEwCwYD VR0PBAQDAgWgMIGABgNVHREEeTB3hwSsEQhkhwQKZAABggprdWJlcm5ldGVzghJr dWJlcm5ldGVzLmRlZmF1bHSCFmt1YmVybmV0ZXMuZGVmYXVsdC5zdmOCJGt1YmVy bmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbIILa3ViZS1tYXN0ZXIwDQYJ KoZIhvcNAQELBQADggEBAFixY0E+lO09vTzoDHgwVMFtMwBCdMh6ZMz9mnCrOFsc knybVhrX/ThRB89atQoRhQE9UoaWrRa+6pws7jwUyVtY16tFrtjgLXB8VUBEuJit G9RmNcV4E0znWt6CFUPLu4M6CQT6Xm/Zyhe4QACwugbtc+3Ix1pTqtNDovHCzxSb wnu3wCpWoFMury0HZcBwwZKGNAU5PO0/bvkxf95a7ZvIg+D0nN7HnAS+0m6NXj6t RtSCcJ15ucPdtMBuGyPQRb4mxn5M7MXDye4ek9SlEelqHeHur+uD5t3sE3tFYBj1 BT9hezwrsSjEkl68Z8ACIqmqadXpDnWANrJmhP4FwnU= -----END CERTIFICATE----- </code></pre> <p><strong>server.key</strong></p> <p>-----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEAqz/PlVA9f7SCunJ6iC5BeWd9mkoiJ1/9XHhvPa1XTP03nrU1 8YhZwekQOD7ef1fP6fz917Woeg5f5BZvKmaYKGxCqF+VPQsC8uyrqhlAYLPlemR9 W/KchNW7BnnnAC8soAqI9LDFMd59MNazTepkhbv5iVr1IkGSNdSkfYBkZdkdyTA5 rzRXzdVWXZ81Xe6jB+3xxWjb2xJlMeZsHndEPnwDvInwTBSmQTkio6OgjSDraXrF 3rAvlGdoq4yKJFk4pFcZLcION8hzmK7YCqTiciJJmlVYrY7D60K1QQLJQCfRd0Gr Twsqa7K2OH+gzs+fzXxUcsZDzR1bYLlF6xCrrQIDAQABAoIBAAtfMWm46lyQoB3B fGGOsMpfFPgp9BqpRSne1YRC/okeR5NCdVKUu2ElGO6jPiM2sZfYNQMeDRIN4lBD LR6jsXb9uW906XQkRw3aqYuiIaRKTfLSuYBhnAM2LjU/4xcgCtaV3IJjOrUVETst Brsl1YcL9IYqhBzCPfNVK5cp74DTzleBjl7ng1y8ijGOTcp5JwUbrrQQZ0U9uqjS nCAjB63e8x7JswXx1jo4pDeumJzyJ1eHNA0oXwSbgZ/q/oUHHYykUrFkPYIIAMKu lZO/Lh2tRNdDf8lXupWmhfcwDO9DYcRr4v37hnDqknWWHEdgR9hborc6vZYAMpPB 0LrIfAECgYEA0rT7bFDCCBmk5yDw2cOl1CHT1BTq7Elw2cjAGgjAygx0puGKuBnr qBYeAQqx3ZZHlMsiT3gSbRP9CLws+QgSUf87deM0kBoiWG6m+KgSxmBIMRJCdo+S c+3QZwWLBFHQLaJCDRN4XNr1HuHzcKYO4th/SpDZ3lQc9wO7S3dBHpsCgYEA0A+B ogw30zf1rIaIv8rRMOItqA6pgR6DbspAYexZyEKUexsvHOw6KMDRz7IwzZRVUkjI uPfEkq3qAhYpEgzi/BIsnj/Ku91THkzkkDBolpuJAa068GupQgbLCLhKWa1h7qrI mAFOxy+9ZIFWbmy4UDaqgT5O78gw1CFwibYXn1cCgYEAlDPX5AepcikXY7o3rfN+ 4AYrCDDuS+QcDBK3i5g8geDg68AX4gXZSxDDadgr4r+g+XcnWt4Jl89HWq2AtGiI +kObfv+gKPs4zRqHNr6A9icin+FH/jxdtky/GLc9YHxrAK3v52KadjVL07z5jXI/ Zi8A2WGo3EgtV1C4nAv1MaECgYAp0GP6IEB754wtLyB+gxFFpL8OPlwcgfhiJK2J wIlOsOrMTutKAcOyewXvmt0qA7yd+9izK8BKxj74SmHYqdRYWoKzDxj8Zn+U4Fkz DTeHxRxkxN7KgKiUh274gqkWmrzKzXHg8qpVZ6fFciTfrmPgYwwjS1Vr5SzDBTFr y7e1owKBgQDMKHPuEE9LT3ljiZFIoU6yxbWU/+rMaJwqmV5bEXbfrL06PjTw7kp/ UnLHJ3TVdCXnY2J4Si39cYAhL5Wr5JiubviaW5zCjjOXbrE3ck16kkJsS8DOXjHT nHNGV48GE51THWl/NbuRQz/rD9McsCwixNm66C2EiakKuKLuv3tI3Q== -----END RSA PRIVATE KEY-----</p>
<p>I think you may have the exact same issue that I just solved. I believe it was you who asked a similar question on #google-containers and something that user "vishh" said fixed the problem for me. Make sure your master IP/hostname is in the certificate that you use for your api server under the cert's <code>Subject Alternative Name:</code> section.</p> <pre><code>$ openssl x509 -in kube-apiserver-server.pem -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1d:60:b0:98:70:95:23:f8
    Signature Algorithm: sha256WithRSAEncryption
    ...
    ...
    ...
            X509v3 Subject Alternative Name:
                DNS:*.kubestack.io, DNS:*.c.kubestack.internal, IP Address:127.0.0.1, IP Address:192.168.10.50
</code></pre> <p>I have very little experience with certs and keys so I used the guide <a href="https://github.com/kelseyhightower/docker-kubernetes-tls-guide" rel="nofollow">here</a> to generate mine.</p> <p>You shouldn't need to use any other auth flags (token/basic), it's done through the certs as you assumed.</p> <p>You need to use http<strong>s</strong> when specifying the server.</p> <p>I'm unsure about your admission-control question.</p> <p>Hope this helps.</p>
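<p>For reference, here is a rough openssl sketch of generating a server cert with the right <code>Subject Alternative Name</code> entries and signing it with an existing CA. The file names (<code>ca.crt</code>, <code>ca.key</code>, <code>openssl.cnf</code>) and the DNS/IP entries are taken from the question above and are only an example; adjust them to your own cluster:</p> <pre><code># openssl.cnf (only the relevant parts)
[req]
prompt = no
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
CN = kube-master
[v3_req]
keyUsage = keyEncipherment, digitalSignature
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = kube-master
IP.1 = 172.17.8.100
IP.2 = 10.100.0.1

# generate a key and a CSR, then sign it with the CA, keeping the v3_req extensions
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -config openssl.cnf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt -days 3650 -extensions v3_req -extfile openssl.cnf
</code></pre>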
<p>Cannot get <code>kubectl</code> bash completion to work on Mac. I installed <code>bash-completion</code> with <code>brew</code>, i.e. <code>brew install bash-completion</code>, then as per the install notes added it to <code>~/.bash_profile</code>:</p> <pre><code>if [ -f $(brew --prefix)/etc/bash_completion ]; then
  . $(brew --prefix)/etc/bash_completion
fi
</code></pre> <p>But I still get the error <code>kubectl -bash: _init_completion: command not found</code> after re-sourcing the profile.</p>
<p>A patch was provided to fix the bash completion issue for <code>kubectl</code>. If you are on OS X, you will need to use the patch until the PR is merged, then source the bash completion again. You will also need to install <code>bash-completion</code> as described in the question above.</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/12400#issuecomment-137567835" rel="nofollow">kubectl patch</a></p> <p><a href="https://github.com/kubernetes/kubernetes/pull/13570" rel="nofollow">kubernetes pull request</a></p> <p><strong>Edit:</strong> this has now been merged into <a href="https://github.com/kubernetes/kubernetes/blob/master/contrib/completions/bash/kubectl" rel="nofollow">master</a></p>
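<p>Once <code>bash-completion</code> is installed and the patched completion script is saved somewhere (the path below is only an assumption), the <code>~/.bash_profile</code> ends up looking roughly like this:</p> <pre><code>if [ -f $(brew --prefix)/etc/bash_completion ]; then
  . $(brew --prefix)/etc/bash_completion
fi

# patched kubectl completion script from the linked PR; path is an example
source ~/.kube/kubectl-completion.sh
</code></pre>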
<p>I'm trying to get a ghost blog deployed on GKE, working off of the <a href="https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/" rel="nofollow">persistent disks with WordPress tutorial</a>. I have a working container that runs fine manually on a GKE node:</p> <pre><code>docker run -d --name my-ghost-blog -p 2368:2368 -d us.gcr.io/my_project_id/my-ghost-blog </code></pre> <p>I can also correctly create a pod using the following method from another tutorial:</p> <pre><code>kubectl run ghost --image=us.gcr.io/my_project_id/my-ghost-blog --port=2368 </code></pre> <p>When I do that I can curl the blog on the internal IP from within the cluster, and get the following output from <code>kubectl get pod</code>:</p> <pre><code>Name: ghosty-nqgt0 Namespace: default Image(s): us.gcr.io/my_project_id/my-ghost-blog Node: very-long-node-name/10.240.51.18 Labels: run=ghost Status: Running Reason: Message: IP: 10.216.0.9 Replication Controllers: ghost (1/1 replicas created) Containers: ghosty: Image: us.gcr.io/my_project_id/my-ghost-blog Limits: cpu: 100m State: Running Started: Fri, 04 Sep 2015 12:18:44 -0400 Ready: True Restart Count: 0 Conditions: Type Status Ready True Events: ... </code></pre> <p><strong>The problem arises when I instead try to create the pod from a yaml file,</strong> per the Wordpress tutorial. Here's the yaml:</p> <pre><code>metadata: name: ghost labels: name: ghost spec: containers: - image: us.gcr.io/my_project_id/my-ghost-blog name: ghost env: - name: NODE_ENV value: production - name: VIRTUAL_HOST value: myghostblog.com ports: - containerPort: 2368 </code></pre> <p>When I run <code>kubectl create -f ghost.yaml</code>, the pod is created, but is never ready:</p> <pre><code>&gt; kubectl get pod ghost NAME READY STATUS RESTARTS AGE ghost 0/1 Running 11 3m </code></pre> <p>The pod continuously restarts, as confirmed by the output of <code>kubectl describe pod ghost</code>:</p> <pre><code>Name: ghost Namespace: default Image(s): us.gcr.io/my_project_id/my-ghost-blog Node: very-long-node-name/10.240.51.18 Labels: name=ghost Status: Running Reason: Message: IP: 10.216.0.12 Replication Controllers: &lt;none&gt; Containers: ghost: Image: us.gcr.io/my_project_id/my-ghost-blog Limits: cpu: 100m State: Running Started: Fri, 04 Sep 2015 14:08:20 -0400 Ready: False Restart Count: 10 Conditions: Type Status Ready False Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Fri, 04 Sep 2015 14:03:20 -0400 Fri, 04 Sep 2015 14:03:20 -0400 1 {scheduler } scheduled Successfully assigned ghost to very-long-node-name Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD created Created with docker id dbbc27b4d280 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD started Started with docker id dbbc27b4d280 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} created Created with docker id ceb14ba72929 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id ceb14ba72929 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 
{kubelet very-long-node-name} spec.containers{ghost} started Started with docker id 0b8957fe9b61 Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} created Created with docker id 0b8957fe9b61 Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} created Created with docker id edaf0df38c01 Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id edaf0df38c01 Fri, 04 Sep 2015 14:03:50 -0400 Fri, 04 Sep 2015 14:03:50 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id d33f5e5a9637 ... </code></pre> <p>This cycle of created/started goes on forever, if I don't kill the pod. The only difference from the successful pod is the lack of a replication controller. I don't expect this is the problem because the tutorial mentions nothing about rc.</p> <p>Why is this happening? How can I create a successful pod from config file? And where would I find more verbose logs about what is going on?</p>
<p>If the same docker image is working via <code>kubectl run</code> but not working in a pod, then something is wrong with the pod spec. Compare the full output of the pod as created from spec and as created by rc to see what differs by running <code>kubectl get pods &lt;name&gt; -o yaml</code> for both. Shot in the dark: is it possible the env vars specified in the pod spec are causing it to crash on startup?</p>
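<p>A quick sketch of that comparison, using the pod names from the question (the <code>--previous</code> flag assumes the container has already crashed at least once):</p> <pre><code># dump both pods as the API server sees them and diff the specs
kubectl get pod ghost -o yaml &gt; ghost-from-file.yaml
kubectl get pod ghosty-nqgt0 -o yaml &gt; ghost-from-run.yaml
diff ghost-from-file.yaml ghost-from-run.yaml

# look at why the container keeps restarting
kubectl logs ghost
kubectl logs ghost --previous
</code></pre>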
<p>Kubernetes volume support is listed here <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/volumes.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/volumes.md</a> , however it does not currently meet my needs. </p> <p>I need to support object storage, both Openstack Swift API and S3 compatible API. (k8s does support AWS directly but I can't use it to connect to a different private object storage that uses the s3 API). </p> <p>Is there any way to extend the functionality to support these two object storage APIs? I need to be able to mount from object storage into pods.</p> <p>EDIT: For now I don't have to support swift API, just the S3 API. Keep in mind it's not actually AWS storage, it's merely using S3 compatible API</p>
<p>I have been thinking of ways to enable Swift as a volume plugin. Volume plugins for cloud block storage (EBS, Cinder, persistent disk) are straightforward compared to object storage: a block storage disk can be provisioned and attached to the VM on which the kubelet is running, then mounted into the container, after which it behaves like a local file system and needs no extra care. Read-only mounts of object storage are also fairly straightforward, and the functionality can be similar to <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/volumes.md#gitrepo" rel="nofollow">gitRepo</a>.</p> <p>On the other hand, writing back to object storage gets tricky. Two ways come to mind:</p> <ol> <li>Some sort of user-space file system plugin which maps to the remote system.</li> <li>A sidecar container whose sole purpose is to sync a particular directory to the object storage system (a rough sketch is below).</li> </ol> <p>Obviously both approaches would be significantly slower, with performance probably directly proportional to network bandwidth.</p>
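<p>A minimal sketch of the sidecar approach, assuming a hypothetical sidecar image that periodically syncs a shared <code>emptyDir</code> volume to an S3-compatible bucket (the image names, endpoint and bucket are placeholders, not a real product):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-s3-sync
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: app
    image: my-app:latest                 # your application image (placeholder)
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: s3-sync
    image: my-s3-sync:latest             # hypothetical sidecar that loops "sync /data to bucket"
    env:
    - name: S3_ENDPOINT
      value: https://s3.example.internal # S3-compatible endpoint, not AWS
    - name: S3_BUCKET
      value: my-bucket
    volumeMounts:
    - name: shared-data
      mountPath: /data
</code></pre>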
<p>Does Kubernetes download the Docker image automatically when I create a pod, or should I run <code>docker pull</code> manually to download the image locally?</p>
<p>You do not need to run <code>docker pull</code> manually. The <a href="http://kubernetes.io/v1.0/docs/user-guide/walkthrough/README.html#pod-definition" rel="nofollow">pod definition</a> contains the image name to pull and Kubernetes will pull the image for you. You have several options in terms of defining how Kubernetes will decide to pull the image, using the <code>imagePullPolicy:</code> definition in your pod spec. Much of this is documented <a href="http://kubernetes.io/v1.0/docs/user-guide/images.html" rel="nofollow">here</a>, but basically you can pull if the image is not present, pull always, never update (once the image is local). Hopefully that doc can get you started.</p>
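<p>For example, a minimal pod spec (the image name is just a placeholder) that only pulls when the image is not already present on the node:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: myregistry/my-image:1.0   # placeholder image
    imagePullPolicy: IfNotPresent    # other values: Always, Never
</code></pre>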
<p>I am running a Kubernetes Cluster on Google Container Engine with the default SkyDNS implementation enabled.</p> <p>How can I add custom DNS Entries in Google Container Engine? Is there any way to:</p> <ol> <li>Add custom DNS Entries to SkyDNS that will persist if the SkyDNS container is restarted?</li> <li>Change the default DNS Settings on my nodes to a custom DNS Server, which in-turn will forward to SkyDNS?</li> <li>Change the Forward DNS of SkyDNS to my custom DNS Server?</li> </ol> <p>Edit: If trying to resolve a Google Compute Engine VM from inside a container, the default DNS Server that Google Container Engine uses will resolve VM Names in the formats:</p> <pre><code>&lt;vm-name&gt;.c.&lt;project-name&gt;.internal &lt;vm-name&gt;.&lt;project-id&gt;.google.internal </code></pre>
<p>At the moment there is not any API to directly manipulate DNS within the cluster. It's something we want to do but have not tackled yet.</p> <p>Can you explain what you're hoping to achieve?</p> <p>Edit: if you want to run Consul, nothing is stopping you. Our DNS server is just one implementation.</p>
<p>As I know, Google's Kubernetes is based on Google's Borg; however, it seems like Borg is larger than Kubernetes. My understanding is that Borg is a large system containing a sub-system like Kubernetes and its own containers like Docker.</p> <p>So, I would like to know:</p> <p>1) In term of containers cluster management, what's the key difference between Borg (sub-system inside) and Kubernetes?</p> <p>2) In term of container technology, what's the key difference between Borg (sub-system inside) and Docker?</p>
<p>I have no 'inside' knowledge of Borg so this answer is based only on what Google themselves have published <a href="http://research.google.com/pubs/pub43438.html">here</a>. For much greater detail, you should look into that paper. Section 8 makes specific reference to Kubernetes and is the basis of this answer (along with Kubernetes own docs):</p> <p>1) Key differences:</p> <ul> <li>Borg groups work by 'job'; Kubernetes adds 'labels' for greater flexibility.</li> <li>Borg uses an IP-per-machine design; Kubernetes uses a network-per-machine and IP-per-Pod design to allow late-binding of ports (letting developers choose ports, not the infrastructure).</li> <li>Borg's API seems to be extensive and rich, but with a steep learning curve; Kubernetes APIs are presumably simpler. At least, for someone who hasn't worked with Borg, the Kubernetes API seems pretty clean and understandable.</li> </ul> <p>2) Borg seems to use <a href="https://github.com/google/lmctfy">LMCTFY</a> as its container technology. Kubernetes allows the use of Docker or rkt.</p> <p>Some other obvious differences are the Borg is not open source and not available for use outside of Google, while Kubernetes is both of those things. Borg has been in production use for more than 10 years, while Kubernetes just hit v1.0 in July 2015.</p> <p>Hope this helps. Check out that Borg paper; it is worth the time to read the whole thing.</p>
<p>I have a kubernetes RC/pod consisting of containers with images like: <code>foobar/my-image:[branch]-latest</code> where "branch" is the git branch ("master", etc).</p> <p>What's the best way to use rolling-update to force the RC to re-pull the images to get the latest version? The brute force method is to simply delete the RC and re-create it, but that causes downtime for the service.</p> <p>Is rolling update only possible if you specify an exact image tag, rather than something like "latest"?</p>
<p>You should be able to use a <a href="http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_rolling-update.html" rel="noreferrer">rolling update</a> specifying the same image name that you are currently using:</p> <pre><code>kubectl rolling-update &lt;replication-controller-name&gt; --image=foobar/myimage:[branch]-latest </code></pre> <p>This will (behind the scenes) create a new replication controller that is a copy of your existing replication controller with the "new" image, and then stepwise resize each of the replication controllers until the old one has zero pods and the new one has the desired number of pods, finally deleting the old one and renaming the new one to use the old name. </p>
<p>I've the a lot of error logs reported by kubelet :</p> <pre><code>Sep 07 09:43:51 kubenode-1 kubelet[10320]: I0907 09:43:51.651224 10320 container.go:369] Failed to update stats for container "/docker/01ad0eff434033752c1f39944e9965e38a07081fcbfe26dc35358bb63be18082": failed to read stat from "/sys/class/net/veth2fc2d33/statistics/rx_bytes" for device "veth2fc2d33", continuing to Sep 07 09:43:56 kubenode-1 kubelet[10320]: I0907 09:43:56.051022 10320 container.go:369] Failed to update stats for container "/": failed to read stat from "/sys/class/net/calic1976c4e52f/statistics/rx_bytes" for device "calic1976c4e52f", continuing to push stats </code></pre> <p>I don't know what is the problem exactly. Please find below further information</p> <pre><code>docker info Containers: 27 Images: 121 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 175 Dirperm1 Supported: true Execution Driver: native-0.2 Kernel Version: 3.16.0-4-amd64 Operating System: Debian GNU/Linux 8 (jessie) CPUs: 2 Total Memory: 2.95 GiB Name: kubenode-1 ID: LXO4:TD3E:ZAL5:AUWE:PN6W:KFZX:S4QR:AX6V:776M:VHVT:7Z3O:O72V Username: smaine Registry: [https://index.docker.io/v1/] kubelet --version Kubernetes v1.0.3 </code></pre> <p>I'm running the kubernetes cluster on debian jessie.</p> <p>Furthermore i have these errors :</p> <pre><code>Sep 07 09:37:19 kubenode-1 kubelet[10320]: W0907 09:37:19.148453 10320 manager.go:1161] No ref for pod '814c1a33de45655e8cff2044485913ab568b9ab858ed2c5aa30d0034b82a6660' Sep 07 09:37:29 kubenode-1 kubelet[10320]: W0907 09:37:29.265237 10320 manager.go:1161] No ref for pod '06a3e276f8b3dca0c3ea20b5feee4ab9b5ee97ef44aad1aef2f0102d5ddfa40c' Sep 07 09:37:31 kubenode-1 kubelet[10320]: W0907 09:37:31.065126 10320 manager.go:1161] No ref for pod '686039f754799616a0957d3fcc516bc46b9601a2c6ac304a010397142fc06cd0' Sep 07 09:37:31 kubenode-1 kubelet[10320]: W0907 09:37:31.075288 10320 kubelet.go:1343] Orphaned volume "c2f1c714-52f8-11e5-b108-5254000ec1d7/influxdb-persistent-storage" found, tearing down volume Sep 07 09:37:31 kubenode-1 kubelet[10320]: W0907 09:37:31.914188 10320 kubelet.go:1343] Orphaned volume "c2f1c714-52f8-11e5-b108-5254000ec1d7/default-token-v2j1u" found, tearing down volume Sep 07 09:38:49 kubenode-1 kubelet[10320]: E0907 09:38:49.224758 10320 kubelet.go:682] Image garbage collection failed: failed to find information for the filesystem labeled "docker-images" </code></pre> <p>I'm using calico as network overlay but i don't think it's related. Please let me know if you need more info.</p> <p>Regards,</p> <p>Smana</p>
<p>If you use Docker v1.7 or above, you may have run into a cadvisor bug, where network stats are not collected correctly. See <a href="https://github.com/kubernetes/kubernetes/issues/13189" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/13189</a></p> <p>The bug has already being fixed in cadvisor, and kubernetes will include the fix soon.</p>
<p>I'm looking for the main differences between OpenShift V3 and V2. Does OpenShift V2 work like this?: <a href="https://www.openshift.com/walkthrough/how-it-works" rel="nofollow">https://www.openshift.com/walkthrough/how-it-works</a> And how do Docker and Kubernetes work in V3?</p> <p>Can someone give me a clear explanation of how OpenShift V2 and V3 are built up?</p>
<p>This is a rather broadly asked question, so I will (and can) answer only in a rather broad manner.</p> <p>There are a lot of key concepts that have changed. These are the most important ones and you'll need some time to get into it, but they are a big improvement to OpenShift v2.:</p> <ul> <li>Cartridges vs. Docker Containers </li> <li>Gears vs. Kubernetes Pods </li> <li>Broker vs. Kubernetes Master </li> <li>Release of <a href="https://access.redhat.com/articles/rhel-atomic-getting-started" rel="noreferrer">Red Hat Enterprise Linux Atomic Host</a></li> </ul> <p>When you'll study the links below you will understand, that (really exaggerated) OpenShift v3 has basically nothing to do with v2 besides the name, the logo and the PaaS focus. But it's still a great tool and IMO has set new standards in the PaaS-world. (No, I don't work for RedHat ;)</p> <p>What's New:</p> <p><a href="https://docs.openshift.com/enterprise/3.0/whats_new/overview.html" rel="noreferrer">https://docs.openshift.com/enterprise/3.0/whats_new/overview.html</a> <a href="https://docs.openshift.com/enterprise/3.0/architecture/overview.html" rel="noreferrer">https://docs.openshift.com/enterprise/3.0/architecture/overview.html</a></p> <p>For starters; Docker &amp; Kubernetes:</p> <p><a href="https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/" rel="noreferrer">https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/</a></p> <p>Pretty new:</p> <p><a href="https://access.redhat.com/articles/1353773" rel="noreferrer">Creating a Kubernetes Cluster to Run Docker Formatted Container Images</a></p> <p>EDIT 2016_06_30: Sorry for necro'ing this old post, but I wanted to add this quick, fun and <em>very</em> informative video about Kubernetes: <a href="https://youtu.be/4ht22ReBjno" rel="noreferrer">https://youtu.be/4ht22ReBjno</a></p>
<p>Do sporadic disk cleanup operations happen automatically in Kubernetes, or should I schedule <code>docker rm</code>/<code>docker rmi</code> jobs to remove discarded containers and images? A single node in my dev K8s cluster (the other nodes are fine) keeps running out of disk space on <code>/</code>, and the following message shows up in the K8s UI events:</p> <p>reason: freeDiskSpaceFailed</p> <p>Message: failed to garbage collect required amount of images. Wanted to free 2069743207, but freed 0</p>
<p>Yes, Kubernetes supports container and image garbage collection. More details in <a href="https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/</a></p> <p>You may also want to check whether that particular node has enough disk space allocated to host the images for the pods assigned to it.</p>
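<p>If you manage the kubelet flags yourself (not possible on a hosted master, and defaults may differ by version), image garbage collection is tuned with thresholds roughly like this:</p> <pre><code># kubelet flags (values shown here are approximate defaults)
--image-gc-high-threshold=90   # start image GC when disk usage exceeds this percent
--image-gc-low-threshold=80    # delete images until usage drops below this percent
</code></pre>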
<p>On GKE, <strong>kube-dns</strong> is running on my nodes, I can see the docker containers.</p> <p>I do have access to <strong>Services</strong> by name, which is great for all these applications where load balancing is a perfectly suitable solution, but <em>how would I use the DNS to access individual pods?</em> </p> <p>I know I can look up specific pods in the API, but I need to update the <code>hosts</code> file myself, and keep watching the pod list. DNS is supposed to do that for me so how is it meant to be used within a pod?</p> <p>The Kubernetes doc says the DNS info needs to be passed to the kubelet but I have no access to that on GKE that I know of, so is it just not setup that way on GKE or is there something to do to activate it? </p> <p>Some of my services (zookeeper in particular) is aware of other nodes on its own, and tries to connect to them by host name (that is pod name) and that fails unless I update the <code>hosts</code> file myself. I would like to use the integrated DNS service for that.</p> <p>Any guidance on how to do this would be appreciated.</p> <p>Thanks</p>
<p><strong>UPDATE</strong></p> <p>According to the docs, the format is now:</p> <p><code>_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local</code></p> <p>See the related doc here: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods</a></p> <p><strong>ORIGINAL ANSWER:</strong></p> <p>as of this date, this is actually not possible... but it is being looked at by the Kubernetes team.</p> <p>See this issue : <a href="https://github.com/kubernetes/kubernetes/issues/13552" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/13552</a></p> <p><strong>UPDATE</strong>:</p> <p>DNS is available for Pods since 09/2015 See PR:<a href="https://github.com/kubernetes/kubernetes/pull/13759" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/13759</a></p> <p>in short: </p> <blockquote> <p>This will give pods dns in the form of <code>&lt;podIP&gt;.&lt;namespace&gt;.pod.&lt;clusterSuffix&gt;</code> Currently can be disabled, but is either on for all pods or off.</p> </blockquote>
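<p>As a concrete (hypothetical) example, a pod with IP <code>10.244.1.5</code> in the <code>default</code> namespace, on a cluster using the usual <code>cluster.local</code> suffix, gets an A record with the dots of the IP replaced by dashes:</p> <pre><code># from inside another pod in the cluster
nslookup 10-244-1-5.default.pod.cluster.local
</code></pre>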
<p>How can I find out if a given Docker image can be run using Kubernetes? </p> <p>What should I do to help ensure that my images will run well in any Kubernetes-managed environment?</p>
<p>All Docker images can be run on Kubernetes -- it uses Docker to run the images.</p> <p>You can expose ports from containers just like when using Docker directly, pass in environment variables, mount storage volumes from the host into the container, and more.</p> <p>If you have anything particular in mind, I'd be interested in hearing about any image you find that can't be run using Kubernetes.</p>
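<p>As an illustration of the port/env/volume mapping mentioned above, the rough pod-spec equivalent of a <code>docker run</code> with a published port, an environment variable and a host volume might look like this (image and paths are placeholders):</p> <pre><code># docker run -p 8080:80 -e MODE=prod -v /srv/data:/data myregistry/my-image:1.0
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: myregistry/my-image:1.0
    env:
    - name: MODE
      value: prod
    ports:
    - containerPort: 80
      hostPort: 8080
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /srv/data
</code></pre>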
<p>Is it possible to specify a <a href="http://kubernetes.io/v1.0/docs/user-guide/labels.html#set-based-requirement" rel="nofollow">set-based label selector</a> for a replication controller? I cannot figure out the syntax to do so in the request json. I can't find anything in the documentation, so if you have a link to the appropriate documentation, that would be helpful.</p>
<p>This is something we want to support, and is/was underway (see PR 7053), but it is not yet possible. </p> <p>You can observe the status/progress on: <a href="https://github.com/kubernetes/kubernetes/issues/341" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/341</a></p> <p>It's possible to work around the lack of this feature by creating a new label that would match the selector you'd like and then create a trivial selector that matches that new label.</p> <p>FYI, you can see the API specification for ReplicationController's spec here: <a href="http://kubernetes.io/v1.0/docs/api-reference/definitions.html#_v1_replicationcontrollerspec" rel="nofollow">http://kubernetes.io/v1.0/docs/api-reference/definitions.html#_v1_replicationcontrollerspec</a></p> <p>The schema is listed as "any", but it's actually a map of string to string, like labels.</p>
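<p>A sketch of the workaround mentioned above (all names are made up): instead of a set-based requirement such as <code>environment in (prod, staging)</code>, stamp an extra equality label onto exactly the pods you want and select on that:</p> <pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: my-rc
spec:
  replicas: 3
  selector:                     # equality-based only
    app: my-app
    rc-group: prod-or-staging   # synthetic label standing in for the set-based rule
  template:
    metadata:
      labels:
        app: my-app
        environment: prod
        rc-group: prod-or-staging
    spec:
      containers:
      - name: my-app
        image: myregistry/my-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
</code></pre>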
<p>After running Kubernetes on AWS for a few days, my master node goes dead. This has happened after setting up 2 different clusters. The pods are still running and available, but there's no way to manage / proxy.</p> <p>Question is why? Or alternatively, how do I replace the master node on AWS? Or alternatively, how do I debug the existing one? Or alternatively, how do I use something other than a t2.micro, which may be too small to run master?</p> <p>Symptom: $ kubectl get pods error: couldn't read version from server: Get https://**.###.###.###/api: dial tcp **.###.###.###:443: connection refused</p> <p>Edit: This is what I found after further debugging:</p> <pre><code>goroutine 571 [running]: net/http.func·018() /usr/src/go/src/net/http/transport.go:517 +0x2a net/http.(*Transport).CancelRequest(0xc2083c0630, 0xc209750d00) /usr/src/go/src/net/http/transport.go:284 +0x97 github.com/coreos/go-etcd/etcd.func·003() /go/src/github.com/GoogleCloudPlatform/kubernetes/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/requests.go:159 +0x236 created by github.com/coreos/go-etcd/etcd.(*Client).SendRequest /go/src/github.com/GoogleCloudPlatform/kubernetes/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/requests.go:168 +0x3e3 goroutine 1 [IO wait, 12 minutes]: net.(*pollDesc).Wait(0xc20870e760, 0x72, 0x0, 0x0) /usr/src/go/src/net/fd_poll_runtime.go:84 +0x47 net.(*pollDesc).WaitRead(0xc20870e760, 0x0, 0x0) /usr/src/go/src/net/fd_poll_runtime.go:89 +0x43 net.(*netFD).accept(0xc20870e700, 0x0, 0x7f4424a42008, 0xc20930a168) /usr/src/go/src/net/fd_unix.go:419 +0x40b net.(*TCPListener).AcceptTCP(0xc20804bec0, 0x5bccce, 0x0, 0x0) /usr/src/go/src/net/tcpsock_posix.go:234 +0x4e net/http.tcpKeepAliveListener.Accept(0xc20804bec0, 0x0, 0x0, 0x0, 0x0) /usr/src/go/src/net/http/server.go:1976 +0x4c net/http.(*Server).Serve(0xc20887ec60, 0x7f4424a66dc8, 0xc20804bec0, 0x0, 0x0) /usr/src/go/src/net/http/server.go:1728 +0x92 net/http.(*Server).ListenAndServe(0xc20887ec60, 0x0, 0x0) /usr/src/go/src/net/http/server.go:1718 +0x154 github.com/GoogleCloudPlatform/kubernetes/cmd/kube-apiserver/app.(*APIServer).Run(0xc2081f0e00, 0xc20806e0e0, 0x0, 0xe, 0x0, 0x0) /go/src/github.com/GoogleCloudPlatform/kubernetes/_output/dockerized/go/src/github.com/GoogleCloudPlatform/kubernetes/cmd/kube-apiserver/app/server.go:484 +0x264a main.main() /go/src/github.com/GoogleCloudPlatform/kubernetes/_output/dockerized/go/src/github.com/GoogleCloudPlatform/kubernetes/cmd/kube-apiserver/apiserver.go:48 +0x154 </code></pre>
<p>It's almost certainly that the initial size of machine was too low, and ran out of memory (or something similar). To use a larger cluster size, follow this link[1] and set an environment variable before you bring up your cluster.</p> <p>In this case, something like:</p> <pre><code>export MINION_SIZE=t2.large </code></pre> <p>Should run forever.[2]</p> <p>[1] <a href="http://kubernetes.io/docs/getting-started-guides/aws/" rel="nofollow">http://kubernetes.io/docs/getting-started-guides/aws/</a></p> <p>[2] Or reasonable approximation thereof. :)</p>
<p>I'm having quite a bit of fun with the gcePersistentDisk in the context of pods inside of kubernetes:</p> <p>Currently I'm attempting to get my rethinkdb cluster to work well with a gcePersistentDisk mounted volume in order to facilitate backups, data recovery, data integrity, etc. This is proving a bit more difficult than I originally anticipated. So, I have a few questions:</p> <p>1: Should I even be attempting to use the gcePersistentDisk for this use case? Or should I be using persistentVolumes, and using the file system/persistentVolumes on my host kubelets in order to persist the data, and only using gcePersistentDisk when I'm doing a backup?</p> <p>2: [EDIT: FIGURED OUT]</p> <p>3: Pretty sure this is just a bug, but if you attempt to scale up a pod with a gcePersistentDisk mounted as a volume, it does not throw the usual:</p> <blockquote> <p>'The ReplicationController "rethinkdb" is invalid:spec.template.spec.volumes.GCEPersistentDisk.ReadOnly: invalid value 'false': ReadOnly must be true for replicated pods > 1, as GCE PD can only be mounted on multiple machines if it is read-only.'</p> </blockquote> <p>, but rather just hangs on the command line and loops forever when I view the kublet's logs.</p> <p>4: Am I going completely in the wrong direction for solving this issue? And if so, how do I persist the DB data from my pods?</p>
<p>Unfortunately I don't know anything about rethinkdb, but it's very reasonable to use a gcePersistentDisk to store the data. That way if the Kubernetes node running your pod dies, the pod can be restarted on another node (assuming you have more than one node in your Kubernetes cluster) and continue to access the data when it comes back up. I don't think there's any reason you need to use persistent volumes here; straight-up GCEPersistentDisk as the VolumeSource should be fine.</p> <p>I'm not sure why you're losing your data when you scale the RC down to 0 and back up to 1. My understanding is that the PD should be re-mounted.</p>
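<p>For completeness, a minimal sketch of a pod using a GCE persistent disk directly as the volume source (the disk name and mount path are assumptions; the disk must already exist in the same zone as the cluster, and a read-write PD can only be attached to one node at a time):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: rethinkdb
spec:
  containers:
  - name: rethinkdb
    image: rethinkdb                 # placeholder; use whatever image/tag you run today
    volumeMounts:
    - name: rethinkdb-storage
      mountPath: /data
  volumes:
  - name: rethinkdb-storage
    gcePersistentDisk:
      pdName: rethinkdb-disk         # pre-created, e.g. with "gcloud compute disks create"
      fsType: ext4
</code></pre>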
<p>I have a kubernetes cluster running on coreos. I wish to run journal2gelf <a href="https://github.com/systemd/journal2gelf" rel="noreferrer">https://github.com/systemd/journal2gelf</a> in a container in a pod I call logging. (I also have a fluentd container in this pod, which works great, I highly recommend it for streaming logs elsewhere). Is it possible to configure a pod to allow essentially this:</p> <pre><code>journalctl -o json -f | docker run &lt;my journal2gelf image&gt; -d -p $GRAYLOG_PORT </code></pre> <p>but within the <code>containers:</code> key in a replication controller config? And in general can kubernetes allow piping to a container?</p>
<p>This will let you send stdin to a container:</p> <p><code>kubectl exec -i POD_NAME COMMAND</code></p> <p>Or</p> <p><code>kubectl attach -i POD_NAME</code></p> <p>But there isn't a good way to send stdin to all containers in a pod, or to all containers spawned by a replication controller.</p>
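<p>So, for the use case in the question, something along these lines could work as a stopgap (the pod and container names and the <code>journal2gelf</code> invocation are assumptions based on the question, not a tested recipe):</p> <pre><code>journalctl -o json -f | kubectl exec -i logging -c journal2gelf -- \
  journal2gelf -d -p $GRAYLOG_PORT
</code></pre>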