
Running Helm under MicroK8s fails with "Error: could not find a ready tiller pod"

I need to learn Kubernetes, Helm, and conjure-up, and to install Eclipse Che.
I am working from a fresh install of [Ubuntu 18.04.2 Server X64], running as a virtual machine inside VMware Workstation, and installing MicroK8s and Helm.

On that fresh Ubuntu install, the only script block I am pasting into the terminal is:

sudo apt-get update
sudo apt-get upgrade
sudo snap install microk8s --classic
microk8s.kubectl version
alias kubectl='microk8s.kubectl'
alias docker='microk8s.docker'
kubectl describe nodes | egrep 'Name:|Roles:|Taints:'
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get nodes
sudo snap install helm --classic
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
            --clusterrole=cluster-admin \
            --serviceaccount=kube-system:tiller
helm init --service-account=tiller
helm version
helm ls
kubectl get po -n kube-system 

Here is the same script block, with each command's terminal output:

myUser@myServer:~$ sudo snap install microk8s --classic
microk8s v1.13.4 from Canonical✓ installed
[1]+  Done                    sleep 10

myUser@myServer:~$ microk8s.kubectl version
Client Version: version.Info { 
    Major:"1", Minor:"13", GitVersion:"v1.13.4", 
    GitCommit:"c27b913frrr1a6c480c287433a087698aa92f0b1", 
    GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", 
    GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
    The connection to the server 127.0.0.1:8080 was 
      refused - did you specify the right host or port?

myUser@myServer:~$ alias kubectl='microk8s.kubectl'

myUser@myServer:~$ alias docker='microk8s.docker'

myUser@myServer:~$ kubectl describe nodes | egrep 'Name:|Roles:|Taints:'
The connection to the server 127.0.0.1:8080 was 
     refused - did you specify the right host or port?

myUser@myServer:~$ kubectl taint nodes --all \
         node-role.kubernetes.io/master-
The connection to the server 127.0.0.1:8080 was 
     refused - did you specify the right host or port?

myUser@myServer:~$ kubectl get nodes
The connection to the server 127.0.0.1:8080 was 
        refused - did you specify the right host or port?

myUser@myServer:~$ sudo snap install helm --classic
helm 2.13.0 from Snapcrafters installed

myUser@myServer:~$ kubectl create serviceaccount tiller \
              --namespace kube-system
Error from server (NotFound): namespaces "kube-system" not found

myUser@myServer:~$ kubectl create clusterrolebinding \
             tiller-cluster-rule \
             --clusterrole=cluster-admin \
             --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created

myUser@myServer:~$ helm init --service-account=tiller
Creating /home/myUser/.helm 
Creating /home/myUser/.helm/repository 
Creating /home/myUser/.helm/repository/cache 
Creating /home/myUser/.helm/repository/local 
Creating /home/myUser/.helm/plugins 
Creating /home/myUser/.helm/starters 
Creating /home/myUser/.helm/cache/archive 
Creating /home/myUser/.helm/repository/repositories.yaml 
Adding stable repo with URL: 
   https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /home/myUser/.helm.
Tiller (the Helm server-side component) has been 
        installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an 
        insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with 
        the --tiller-tls-verify flag.
For more information on 
   securing your installation see: 
   https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

myUser@myServer:~$ helm version
Client: &version.Version { 
   SemVer:"v2.13.0",
   GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", 
   GitTreeState:"clean"}
Error: could not find tiller

myUser@myServer:~$ helm ls
Error: could not find tiller

myUser@myServer:~$ kubectl get po -n kube-system 
No resources found.

With @aurelius's help I improved the script above so that the refused connection to 127.0.0.1:8080 is also visible, but I still get the same error:

Error: could not find a ready tiller pod

And, as shown above, I applied the Stack Overflow fix.

There is an issue open on GitHub that points to the fix above and was closed as resolved, but it does not solve the problem.

One person says the problem lies in the snap version of LXD, which does not integrate with conjure-up. He says to install LXD from the apt package; his full explanation is here: https://askubuntu.com/a/959771
I will try it to see if it works and report back here.
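
In the meantime, here is a minimal set of checks I can run to see whether the cluster itself is up before retrying helm init (assuming the kubectl alias from above; this is only a diagnostic sketch, not a confirmed fix):

kubectl cluster-info        # should print the API server address instead of a refused connection
kubectl get nodes           # the node should eventually report Ready
kubectl get namespaces      # kube-system must exist before the tiller service account can be created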

1
Mark

What was needed was:

helm repo update

Here is the full set of commands:

# Ensure there is enough disk space to install everything
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo dpkg-reconfigure tzdata
sudo snap remove lxc
sudo snap remove lxd
sudo apt-get remove --purge lxc 
sudo apt-get remove --purge lxd 
sudo apt-get autoremove
# can throw an error; make sure each purge/uninstall above succeeded
sudo apt-add-repository ppa:ubuntu-lxc/stable
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo apt-get install tmux lxc lxd zfsutils-linux 
df -h => 84% Free, 32G
{ SNAPSHOT - beforeLxdInit }
lxd init
    ipv6:none
ifconfig | grep flags
sudo sysctl -w net.ipv6.conf.ens33.disable_ipv6=1  
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1  
sudo sysctl -w net.ipv6.conf.lxcbr0.disable_ipv6=1  
sudo sysctl -w net.ipv6.conf.lxdbr0.disable_ipv6=1  
time sudo snap install conjure-up --classic
{ SNAPSHOT - beforeConjureUp }
conjure-up => CHOICE = { microk8s }
alias kubectl='microk8s.kubectl'
#------------------------------------
# not necessary to enable all of these, but it's a test
microk8s.enable storage
microk8s.enable registry    
microk8s.enable dns dashboard ingress istio metrics-server prometheus fluentd jaeger
#------------------------------------
time sudo snap install helm --classic
helm init
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
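# (Added check, not part of the original list: confirm the patch took effect and wait
#  for the tiller-deploy rollout to finish before calling helm again)
kubectl -n kube-system get deploy tiller-deploy -o yaml | grep serviceAccount
kubectl -n kube-system rollout status deployment/tiller-deploy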
helm search
# Before updating the repo it threw an error:
helm version
    Error: could not find a ready tiller pod 
# Then update the repo:
helm repo update
# After updating the repo it was OK:
helm version
    Client: &version.Version { 
            SemVer:"v2.13.0", 
            GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6",
            GitTreeState:"clean"
        }
    Server: &version.Version { 
            SemVer:"v2.13.0", 
            GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", 
            GitTreeState:"clean" 
        }
#------------------------------------
helm install stable/mysql
df -h | grep sda {
    Filesystem:/dev/sda2,
    Size:40G,  
    Used:12G,  
    Avail:26G, 
    Use%:31% 
    Mounted-on:/
    }
{ SNAPSHOT - afterFixErrorBeforeEclipseChe }
#------------------------------------
========================================================================================================================
# Looks like it added a mess of OverlayFS mounts
df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            1.9G     0  1.9G   0% /dev
    tmpfs           393M  2.5M  390M   1% /run
    /dev/sda2        40G   12G   26G  31% /
    tmpfs           2.0G     0  2.0G   0% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
    /dev/loop0       91M   91M     0 100% /snap/core/6350
    tmpfs           393M     0  393M   0% /run/user/1000
    tmpfs           100K     0  100K   0% /var/lib/lxd/shmounts
    tmpfs           100K     0  100K   0% /var/lib/lxd/devlxd
    /dev/loop1      110M  110M     0 100% /snap/conjure-up/1045
    /dev/loop2      205M  205M     0 100% /snap/microk8s/492
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M  4.7M   60M   8% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M  4.7M   60M   8% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
========================================================================================================================

kubectl run eclipseche --image=eclipse/che-server:nightly
    deployment.apps/eclipseche2 created
    ------------------------------------
    # Can't find a way to follow the advice below; can't find the equivalent syntax
    kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be 
    removed in a future version. Use kubectl run --generator=run-pod/v1 
    or kubectl create instead
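    ------------------------------------
    # Possible non-deprecated equivalents (my assumption, not run here; the name would
    # have to differ, since the eclipseche deployment already exists):
    kubectl create deployment eclipseche --image=eclipse/che-server:nightly
    # or, to get a bare pod instead of a Deployment:
    kubectl run eclipseche --generator=run-pod/v1 --image=eclipse/che-server:nightly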

kubectl get pods
    NAME                                      READY   STATUS    RESTARTS   AGE
    brown-hyena-mysql-75f584d69d-rbfv4        1/1     Running   0          72m
    default-http-backend-5769f6bc66-z7jb4     1/1     Running   0          91m
    eclipseche-589954dc99-d4bxm               1/1     Running   0          6m13s
    nginx-ingress-microk8s-controller-p88nm   1/1     Running   0          91m

kubectl get svc
    NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    brown-hyena-mysql      ClusterIP   10.152.184.38   <none>        3306/TCP   74m
    default-http-backend   ClusterIP   10.152.184.99   <none>        80/TCP     93m
    kubernetes             ClusterIP   10.152.184.1    <none>        443/TCP    99m

microk8s.kubectl describe pod eclipseche-589954dc99-d4bxm | grep "IP:"
    IP:  10.1.1.54

sudo apt-get install net-tools nmap

nmap 10.1.1.54 | grep open
    8080/tcp open  http-proxy
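
# Added follow-up, not part of the original transcript: since 8080/tcp is open, the Che
# server can be probed directly on the pod IP reported above to confirm it answers HTTP.
curl -I http://10.1.1.54:8080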
2
Mark

The error occurs because there is no service account for Tiller. You can create it by running the following:

kubectl create serviceaccount tiller --namespace kube-system

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

helm init --service-account=tiller

You can read more about Tiller here.
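
To wait for the Tiller pod to actually become ready after helm init, one possible check (assuming the default tiller-deploy name and the name=tiller label that helm init applies) is:

kubectl -n kube-system rollout status deployment/tiller-deploy

kubectl -n kube-system get pods -l name=tiller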

0
aurelius

Try it with the following:

$ microk8s.enable helm

$ microk8s.helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | microk8s.kubectl apply -f -
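
If that applies cleanly, a quick way to confirm (one possible check, assuming the deployment keeps its default tiller-deploy name) is:

$ microk8s.kubectl -n kube-system get deployment tiller-deploy

$ microk8s.helm version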
0
Som P