Recently, while migrating a mesos/marathon cluster to kubernetes, it was observed that response times on kubernetes were substantially slower than on marathon.
So, what could be compared between the two setups to give further insight into the issue?
In both setups, docker was being used to run and manage containers.
Therefore, docker container inspect was used. After comparing the output of multiple containers from both orchestrators, it was found that the CpuQuota values differed.
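For reference, the relevant fields can be pulled straight out of docker container inspect with a Go-template filter (the container name below is a placeholder):

```shell
# Print CpuQuota, CpuPeriod and CpuShares for a running container
# ("my-container" is a placeholder name)
docker container inspect \
  --format 'quota={{.HostConfig.CpuQuota}} period={{.HostConfig.CpuPeriod}} shares={{.HostConfig.CpuShares}}' \
  my-container
```

Running this against containers from both clusters makes the CpuQuota difference visible at a glance.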
First, let’s see what CpuQuota and CpuShares are.
CpuQuota – Upper limit of CPU time that a container may use within a cpu-period (default 100 ms). Once a process has consumed its cpu quota, it is throttled until the next cpu-period begins. A value of 0 implies no limit.
CpuShares – Relative CPU time weightage given to a container. This value is of significance only when different processes/containers are competing for CPU; in the absence of CPU contention, all the CPU time is available to the process in question. For example, 4 containers competing for CPU on a host would each get CPU time proportional to their cpushares values.
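As a rough sketch of that proportionality (the share values here are made up), each container's CPU fraction under full contention is its shares divided by the total shares:

```shell
# Hypothetical --cpu-shares values for 4 containers competing on one host
total=$(( 1024 + 512 + 512 + 2048 ))   # 4096
for s in 1024 512 512 2048; do
  awk -v s="$s" -v t="$total" \
    'BEGIN { printf "shares=%-4d -> %4.1f%% of CPU under contention\n", s, 100 * s / t }'
done
```

So the container with 2048 shares would get roughly half the CPU, and the two with 512 shares one eighth each, but only while all four are busy.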
CpuQuota had a value of 0 (zero) in the mesos/marathon setup, whereas kubernetes had finite values defined for this parameter. Upon removing that limit from the kubernetes configuration, the response times started matching those of the mesos cluster.
More generally, setting --cpus on docker is preferable to setting CpuQuota directly. A value of --cpus=1.5 is equivalent to a CpuQuota of 150 ms (1.5 × cpu-period) with the default cpu-period of 100 ms.
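To illustrate the arithmetic (the 1.5 value is just an example; CFS expresses both numbers in microseconds, so the 100 ms default period is 100000 µs):

```shell
# docker run --cpus=1.5 ...  is documented as equivalent to
# docker run --cpu-period=100000 --cpu-quota=150000 ...
cpus=1.5
period=100000
quota=$(awk -v c="$cpus" -v p="$period" 'BEGIN { printf "%d", c * p }')
echo "cpu-quota=$quota"   # cpu-quota=150000
```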
But what in kubernetes gets mapped to docker’s CpuQuota and CpuShares?
The following kubernetes yaml specs are used for defining these values:
resources.limits.cpu – Translates to docker’s CpuQuota
resources.requests.cpu – Translates to docker’s CpuShares
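Putting it together, a minimal pod spec sketch (the names, image and values are illustrative; the mappings in the comments assume the docker runtime, where requests.cpu in millicores becomes CpuShares as millicores × 1024 / 1000, and limits.cpu becomes CpuQuota over a 100 ms CpuPeriod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example            # illustrative name
spec:
  containers:
  - name: app
    image: nginx           # illustrative image
    resources:
      requests:
        cpu: 250m          # -> CpuShares = 250 * 1024 / 1000 = 256
      limits:
        cpu: 500m          # -> CpuQuota = 50000 µs per 100000 µs CpuPeriod
```

Dropping the limits.cpu entry (as was done above) leaves CpuQuota at 0, i.e. no CFS throttling, while requests.cpu still sets the relative weight via CpuShares.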