
Slides presented at OpenStack Day Korea 2017.





## Register a user with the OpenStack Foundation



## Register a user on Launchpad (the email must match the OpenStack Foundation email)

## Check your launchpad.net user id (verify the page resolves to your own id)
https://launchpad.net/~seungkyua


## Register a user on the review site


## Register the required information on the review site
1. Register a Username in the Profile menu
2. In Contact Information, check that the date has been updated as below (if not, enter the information)
   Contact information last updated on May 25, 2015 at 12:51 PM.
3. Register your SSH Public Keys
   $ cat ~/.ssh/id_rsa.pub
4. Sign the Agreements





[ Adding yourself to stackalytics ]
$ mkdir -p ~/Documents/git && cd ~/Documents/git
$ git clone ssh://seungkyu@review.openstack.org:29418/openstack/stackalytics
$ cd stackalytics


## Install git and git-review
$ brew install git git-review


## Configuration (gitreview.username is the Profile Username on the review site)
$ git config --add gitreview.username "seungkyu"
$ git config --add user.name "Seungkyu Ahn"
$ git config --add user.email "seungkyua@gmail.com"




## Test the connection and download the commit-msg hook
$ git review -s




## Add your entry (in alphabetical order by launchpad_id); only one company may have end_date: null
## Only launchpad_id is required; the other ids are optional
$ git checkout -b seungkyua
$ vi etc/default_data.json
        {
            "launchpad_id": "seungkyua",
            "gerrit_id": "seungkyu",
            "github_id": "seungkyua",
            "companies": [
                {
                    "company_name": "Samsung SDS",
                    "end_date": "2015-Feb-28"
                },
                {
                    "company_name": "OpenStack Korea User Group",
                    "end_date": "2016-Dec-31"
                },
                {
                    "company_name": "SK telecom",
                    "end_date": null
                }
            ],
            "user_name": "Seungkyu Ahn",
            "emails": ["ahnsk@sk.com", "seungkyua@gmail.com"]
        },




## If the company name is not in the companies section, add it
25785         {
25786             "domains": ["sktelecom.com"],
25787             "company_name": "SK telecom",
25788             "aliases": ["SKT", "SKTelecom"]
25789         },



$ git commit -a

## Write the commit message like this
modify personal info about seungkyua


## How to write a commit message
The first line is a short summary of 50 characters or less.
[blank line]
Write the description, wrapping any line that would exceed 72 characters onto the next line.
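For example, a commit message for this change could look like the following (the body text is illustrative):

modify personal info about seungkyua

Add seungkyua to etc/default_data.json with the company history
(Samsung SDS -> OpenStack Korea User Group -> SK telecom) so that
stackalytics attributes contributions to the right company.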



## Submit the review
$ git review



## If git review fails with a Change-Id setup error, run the command shown in the error message
$ gitdir=$(git rev-parse --git-dir); scp -p -P 29418 seungkyu@review.openstack.org:hooks/commit-msg ${gitdir}/hooks/

$ git commit --amend
$ git review


## Verify on the review site











Once you move a device into a namespace with ip link, that device is no longer visible on the host.

Here is how to return it to the host.



## Put the host server's eno2 into the qrouter namespace and set an IP
$ sudo ip netns
qrouter-68cfc511-7e75-4b85-a1ca-d8a09c489ccc
qdhcp-03a6de58-9693-4c41-9577-9307c8750141

## Move eno2 into the namespace
$ sudo ip link set eno2 netns qrouter-68cfc511-7e75-4b85-a1ca-d8a09c489ccc
$ sudo ip netns exec qrouter-68cfc511-7e75-4b85-a1ca-d8a09c489ccc ip a
4: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3c:a8:2a:20:ed:d1 brd ff:ff:ff:ff:ff:ff

$ sudo ip netns exec qrouter-68cfc511-7e75-4b85-a1ca-d8a09c489ccc ip addr add 192.168.130.100/24 dev eno2
$ sudo ip netns exec qrouter-68cfc511-7e75-4b85-a1ca-d8a09c489ccc ifconfig eno2 up



## Send eno2 in the qrouter namespace back to the host

$ sudo ip netns exec qrouter-68cfc511-7e75-4b85-a1ca-d8a09c489ccc ip link set eno2 netns 1
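Here netns 1 means the network namespace of PID 1, i.e. the host. A quick check that eno2 is visible on the host again; note that the address set inside the namespace does not survive the move and has to be reapplied if needed:

$ ip link show eno2
$ sudo ip addr add 192.168.130.100/24 dev eno2
$ sudo ip link set eno2 up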










## https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/cephfs

[ On the ceph-admin node ]
$ ssh ceph@192.168.30.22

## Create the kubes pool
$ ceph osd pool create kubes 128

## Create the kube user
$ ceph auth get-or-create client.kube mon 'allow r' \
osd 'allow class-read object_prefix rbd_children, allow rwx pool=kubes'


[client.kube]
    key = AQCt/BpYigJ7MRAA5vy+cl39EsKpY3C+tXEGrA==

## Create and view the secret key for the kube user
$ ceph auth get-or-create client.kube
AQCt/BpYigJ7MRAA5vy+cl39EsKpY3C+tXEGrA==


## Add the kube key and ceph.conf to the kube-node01 and kube-node02 servers
$ ssh stack@192.168.30.15 sudo mkdir -p /etc/ceph
$ ceph auth get-or-create client.kube | ssh stack@192.168.30.15 sudo tee /etc/ceph/ceph.client.kube.keyring
$ cat /etc/ceph/ceph.conf | ssh stack@192.168.30.15 sudo tee /etc/ceph/ceph.conf
$ ssh stack@192.168.30.15 sudo chown -R stack.stack /etc/ceph

$ ssh stack@192.168.30.16 sudo mkdir -p /etc/ceph
$ ceph auth get-or-create client.kube | ssh stack@192.168.30.16 sudo tee /etc/ceph/ceph.client.kube.keyring
$ cat /etc/ceph/ceph.conf | ssh stack@192.168.30.16 sudo tee /etc/ceph/ceph.conf
$ ssh stack@192.168.30.16 sudo chown -R stack.stack /etc/ceph



[ Log in to kube-node01 and kube-node02 ]
## Install the ceph rbd client (ceph-common) and the ceph fs client (ceph-fs-common)
$ sudo apt-get -y install ceph-common ceph-fs-common



########################################
## Connecting via ceph rbd
########################################

## https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/rbd
## https://github.com/ceph/ceph-docker/tree/master/examples/kubernetes

[ On the ceph-admin node ]

## Put the kube keyring file in place
$ sudo vi /etc/ceph/ceph.client.kube.keyring
[client.kube]
    key = AQCt/BpYigJ7MRAA5vy+cl39EsKpY3C+tXEGrA==


## Create an rbd image
## http://karan-mj.blogspot.kr/2013/12/ceph-installation-part-3.html

$ rbd create ceph-rbd-test --pool kubes --name client.kube --size 1G -k /etc/ceph/ceph.client.kube.keyring

$ rbd list --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
$ rbd -p kubes ls


## The new image features in Jewel currently cause mount problems on most OSes, so they must be disabled
$ rbd feature disable ceph-rbd-test fast-diff --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
$ rbd feature disable ceph-rbd-test deep-flatten --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
$ rbd feature disable ceph-rbd-test object-map --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
$ rbd feature disable ceph-rbd-test exclusive-lock --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
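Depending on the rbd client version, the same four features may also be disabled in a single command (a sketch, keeping the same dependency-safe order; verify against your client):

$ rbd feature disable ceph-rbd-test fast-diff deep-flatten object-map exclusive-lock \
  --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring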

$ rbd info ceph-rbd-test --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
$ rbd --image ceph-rbd-test -p kubes info

$ rbd remove ceph-rbd-test --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring


## base64-encode the key to build the secret yaml
$ grep key /etc/ceph/ceph.client.kube.keyring |awk '{printf "%s", $NF}'|base64
QVFDdC9CcFlpZ0o3TVJBQTV2eStjbDM5RXNLcFkzQyt0WEVHckE9PQ==
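To double-check, the encoded value can be decoded back and compared with the keyring:

$ echo "QVFDdC9CcFlpZ0o3TVJBQTV2eStjbDM5RXNLcFkzQyt0WEVHckE9PQ==" | base64 -d
AQCt/BpYigJ7MRAA5vy+cl39EsKpY3C+tXEGrA==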




[ Log in to kube-deploy ]

## Create the secret key as a Kubernetes Secret object
$ vi ~/kube/ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFDdC9CcFlpZ0o3TVJBQTV2eStjbDM5RXNLcFkzQyt0WEVHckE9PQ==

$ scp ~/kube/ceph-secret.yaml kube-master01:~/kube/.
$ ssh kube-master01 "kubectl create -f ~/kube/ceph-secret.yaml"
$ kubectl -s http://kube-master01:8080 get secrets


## Create an rbd-with-secret pod that uses the rbd volume
$ vi ~/kube/rbd-with-secret.yml
apiVersion: v1
kind: Pod
metadata:
  name: rbd2
spec:
  containers:
  - image: gcr.io/google_containers/busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: rbd-rw-busybox
    volumeMounts:
    - mountPath: "/mnt/rbd"
      name: rbdpd
  volumes:
  - name: rbdpd
    rbd:
      monitors:
      - 192.168.30.23:6789
      - 192.168.30.24:6789
      - 192.168.30.25:6789
      pool: kubes
      image: ceph-rbd-test
      user: kube
      keyring: /etc/ceph/ceph.client.kube.keyring
      secretRef:
        name: ceph-secret
      fsType: ext4
      readOnly: false


$ scp ~/kube/rbd-with-secret.yml kube-master01:~/kube/.
$ ssh kube-master01 "kubectl create -f ~/kube/rbd-with-secret.yml"
$ kubectl -s http://kube-master01:8080 get pods




## Verify the rbd attachment
$ kubectl -s http://kube-master01:8080 describe pods rbd2
$ kubectl -s http://kube-master01:8080 exec -it rbd2 -- df -h



[ Log in to kube-node02 ]

$ docker ps
$ docker inspect --format '{{ .Mounts }}' 4c4070a1393b

## or
$ mount |grep kub
/dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/kubes-image-ceph-rbd-test type ext4 (rw,relatime,stripe=1024,data=ordered)
/dev/rbd0 on /var/lib/kubelet/pods/061973fc-a265-11e6-940f-5cb9018c67dc/volumes/kubernetes.io~rbd/rbdpd type ext4 (rw,relatime,stripe=1024,data=ordered)




[ Log in to kube-deploy ]

## Create an rbd pod using only the keyring, without the secret object
$ vi ~/kube/rbd.yml
apiVersion: v1
kind: Pod
metadata:
  name: rbd
spec:
  containers:
  - image: gcr.io/google_containers/busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: rbd-rw-busybox
    volumeMounts:
    - mountPath: "/mnt/rbd"
      name: rbdpd
  volumes:
  - name: rbdpd
    rbd:
      monitors:
      - 192.168.30.23:6789
      - 192.168.30.24:6789
      - 192.168.30.25:6789
      pool: kubes
      image: ceph-rbd-test
      user: kube
      keyring: /etc/ceph/ceph.client.kube.keyring
      fsType: ext4
      readOnly: false


$ scp ~/kube/rbd.yml kube-master01:~/kube/.
$ ssh kube-master01 "kubectl create -f ~/kube/rbd.yml"
$ kubectl -s http://kube-master01:8080 get pods

## Verify the rbd attachment

$ kubectl -s http://kube-master01:8080 exec -it rbd -- df -h 





Using Kubernetes Authentication and Authorization


[ Authentication ]

  • client-ca-file 로 접속
  • static password file 사용
  • static token file 사용
  • OpenStack Keystone 사용
client-ca-file is ruled out first because Authorization cannot be used with it.

A static password file also works, but I have not tried combining it with Authorization.

The OpenStack Keystone integration is still alpha and changes often, so it is not yet at the stage of reading the source; I skip it here.

The static token approach integrates well with Authorization, so that is what is used here.



## Generate a uuid

$ cat /proc/sys/kernel/random/uuid



## Replace {{uuid}} with the value generated above; each line is token,user,uid[,"group1,group2"]

$ sudo vi /etc/default/kube-token
{{uuid}},admin,1
{{uuid}},ahnsk,2,"tfabric,group1"
{{uuid}},stack,3,tfabric



## Add the token file option to the api server

$ sudo chown stack.root /etc/default/kube-token

$ sudo vi /etc/default/kube-apiserver
--token-auth-file=/etc/default/kube-token \

$ sudo systemctl restart kube-apiserver.service

$ kubectl -s https://kube-master01:6443 --token={{uuid}} get node





[ Authorization ]

  • ABAC Mode
  • RBAC Mode
RBAC is still alpha, so ABAC, which is beta, is used here.

## cluster-wide admin: admin,   tfabric admin: ahnsk,   tfabric read-only user: stack
## Because kubectl checks the api version, nonResourcePath must also be set to * in every policy

$ sudo vi /etc/default/kube-rbac.json
{"apiVersion":"abac.authorization.kubernetes.io/v1beta1","kind":"Policy","spec":{"user":"system:serviceaccount:kube-system:default","namespace":"*","resource":"*","apiGroup":"*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"admin", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*" }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"ahnsk", "namespace": "tfabric", "resource": "*", "apiGroup": "*", "nonResourcePath": "*" }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"stack", "namespace": "tfabric", "resource": "*", "apiGroup": "*", "readonly": true, "nonResourcePath": "*"}}



The first line is required so that the kube-system service account can reach kube-apiserver.





$ sudo vi /etc/default/kube-apiserver
--authorization-mode=ABAC \
--authorization-policy-file=/etc/default/kube-rbac.json \


$ sudo systemctl restart kube-apiserver.service




$ cd ~/kube

$ vi busybox-tfabric.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: tfabric
spec:
  containers:
  - image: gcr.io/google_containers/busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always



$ kubectl -s https://kube-master01:6443 --token={{uuid}} --v=8 version
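If the ABAC policies are in effect, the per-user tokens defined in kube-token should behave roughly as follows ({{ahnsk-uuid}} and {{stack-uuid}} are placeholders for the tokens assigned to each user above):

## ahnsk (tfabric admin) can create the pod in the tfabric namespace
$ kubectl -s https://kube-master01:6443 --token={{ahnsk-uuid}} create -f busybox-tfabric.yaml

## stack (read-only) can list pods in tfabric, but a delete should be denied
$ kubectl -s https://kube-master01:6443 --token={{stack-uuid}} get pods --namespace=tfabric
$ kubectl -s https://kube-master01:6443 --token={{stack-uuid}} delete pod busybox --namespace=tfabric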


Since specifying the token on every command is tedious, it is better to use a kubectl config context.

That will be covered another time...
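In the meantime, a minimal sketch of that approach (names such as tfabric-cluster and the context name are placeholders; adjust to your environment):

$ kubectl config set-cluster tfabric-cluster --server=https://kube-master01:6443 --insecure-skip-tls-verify=true
$ kubectl config set-credentials ahnsk --token={{uuid}}
$ kubectl config set-context tfabric --cluster=tfabric-cluster --user=ahnsk --namespace=tfabric
$ kubectl config use-context tfabric
$ kubectl get node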
















## https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt


cpu.cfs_period_us is normally used together with cpu.cfs_quota_us, and the relationship is calculated as follows (unit: microseconds).

Share of 1 CPU (%) = cpu.cfs_quota_us / cpu.cfs_period_us * 100

To let a cgroup use 20% of 1 cpu, set the values as below.

# echo 10000 > cpu.cfs_quota_us /* quota = 10ms */
# echo 50000 > cpu.cfs_period_us /* period = 50ms */


In Kubernetes, cpu and memory limits are assigned with limit resources like this:
--limits='cpu=200m,memory=512Mi'

A cpu limit of 200m means 0.2 CPU: per cpu.cfs_period_us interval, 0.2 of that period is granted as quota (the maximum period is 1 sec == 1,000,000 microseconds).
The cgroup default is cpu.cfs_quota_us=-1 (no limit), so the limit above caps the container at 20% of one cpu.
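As a concrete example, assuming the default CFS period of 100 ms that the kubelet/Docker use, a 200m limit translates to roughly these cgroup values (the cgroup path below is abbreviated and depends on the pod):

## cpu=200m  ->  0.2 CPU
## cpu.cfs_period_us = 100000                  /* 100 ms (default) */
## cpu.cfs_quota_us  = 0.2 * 100000 = 20000    /* 20 ms per period -> 20% of one cpu */

# cat /sys/fs/cgroup/cpu/<pod cgroup>/cpu.cfs_period_us
100000
# cat /sys/fs/cgroup/cpu/<pod cgroup>/cpu.cfs_quota_us
20000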




###################################
## Install kube-dns (skydns) and the dashboard
###################################
## Searching for [kubernetes] in the search box will bring up the related posts


$ cd ~/kube
$ export KUBE_ROOT=/home/ubuntu/go_workspace/src/k8s.io/kubernetes
$ export DNS_REPLICAS=1
$ export DNS_DOMAIN=cluster.local
$ export DNS_SERVER_IP=192.168.30.200

$ sed -e "s/\\\$DNS_REPLICAS/${DNS_REPLICAS}/g;\
s/\\\$DNS_DOMAIN/${DNS_DOMAIN}/g;" \
"${KUBE_ROOT}/cluster/addons/dns/skydns-rc.yaml.sed" > skydns-rc.yaml

## Add kube-master-url to skydns-rc.yaml
$ vi skydns-rc.yaml
81         - --kube-master-url=http://192.168.30.13:8080


$ sed -e "s/\\\$DNS_SERVER_IP/${DNS_SERVER_IP}/g" \
"${KUBE_ROOT}/cluster/addons/dns/skydns-svc.yaml.sed" > skydns-svc.yaml

$ cat <<EOF >namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
EOF


$ cp ~/go_workspace/src/k8s.io/kubernetes/cluster/addons/dashboard/dashboard-controller.yaml ~/kube/.
$ cp ~/go_workspace/src/k8s.io/kubernetes/cluster/addons/dashboard/dashboard-service.yaml ~/kube/.


## Copy to kube-master01
$ ssh kube-master01 "mkdir -p kube"
$ scp ~/kube/skydns-rc.yaml kube-master01:~/kube/.
$ scp ~/kube/skydns-svc.yaml kube-master01:~/kube/.
$ scp ~/kube/namespace.yaml kube-master01:~/kube/.

$ scp ~/kube/dashboard-controller.yaml kube-master01:~/kube/.
$ scp ~/kube/dashboard-service.yaml kube-master01:~/kube/.

$ ssh kube-master01 "kubectl create -f ~/kube/namespace.yaml"
$ ssh kube-master01 "kubectl --namespace=kube-system create -f ~/kube/skydns-rc.yaml"
$ ssh kube-master01 "kubectl --namespace=kube-system create -f ~/kube/skydns-svc.yaml"



## Install the dashboard
$ cd ~/kube

## Add the information needed to reach the master api
$ vi kubernetes-dashboard.yaml
47         - --apiserver-host=http://192.168.30.13:8080

$ scp ~/kube/kubernetes-dashboard.yaml kube-master01:~/kube/.
$ ssh kube-master01 "kubectl create -f ~/kube/kubernetes-dashboard.yaml" 



## Verify the skydns installation
$ vi ~/kube/busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: gcr.io/google_containers/busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always


$ scp ~/kube/busybox.yaml kube-master01:~/kube/.
$ ssh kube-master01 "kubectl create -f ~/kube/busybox.yaml"


## The api server can be specified with the -s option
$ kubectl -s http://kube-master01:8080 get pods --all-namespaces -o wide
$ kubectl -s http://kube-master01:8080 describe pod kube-dns-v19-d1tse --namespace=kube-system

$ kubectl -s http://kube-master01:8080 get pods busybox -o wide
$ kubectl -s http://kube-master01:8080 exec busybox -- nslookup kubernetes.default
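If kube-dns is working, the lookup resolves against the DNS_SERVER_IP configured above and returns the kubernetes service cluster IP (output roughly like this):

Server:    192.168.30.200
Address 1: 192.168.30.200 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: <kubernetes service cluster IP> kubernetes.default.svc.cluster.local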




###################################
## kubectl usage
###################################

## Create an nginx pod (replica sets have replaced replication controllers)
$ kubectl -s http://kube-master01:8080 run nginx --image=nginx (--replicas=2) --port=80
$ kubectl -s http://kube-master01:8080 get pods
$ kubectl -s http://kube-master01:8080 get pods -o wide
$ kubectl -s http://kube-master01:8080 get pod -l run=nginx


## nginx scaling
$ kubectl -s http://kube-master01:8080 scale deployment/nginx --replicas=1

## nginx auto scaling
$ kubectl -s http://kube-master01:8080 autoscale deployment/nginx --min=1 --max=3


## Look up the nginx replica set
$ kubectl -s http://kube-master01:8080 get rs



## Upgrade the image version without service interruption (edit the file with edit)
$ kubectl -s http://kube-master01:8080 edit deployment/nginx


## Create an nginx service (port: the port the service exposes, target-port: the port the container listens on inside docker)
$ kubectl -s http://kube-master01:8080 expose deployment nginx --port=8080 --target-port=80 (--type=LoadBalancer) --name=nginx
$ kubectl -s http://kube-master01:8080 get services


## Delete the nginx deployment
$ kubectl -s http://kube-master01:8080 get deployment (nginx)
$ kubectl -s http://kube-master01:8080 delete deployment nginx

## Delete the nginx service
$ kubectl -s http://kube-master01:8080 delete service nginx

## Delete the nginx deployment and service together
$ kubectl -s http://kube-master01:8080 delete deployment,service nginx
$ kubectl -s http://kube-master01:8080 delete deployments/nginx services/nginx

## Delete the nginx deployment and service together (using labels: -l or --selector)
$ kubectl -s http://kube-master01:8080 delete deployment,services -l app=nginx


## Delete the nginx replication controller (by default its pods are deleted too; --cascade=false deletes only the rc)
$ kubectl -s http://kube-master01:8080 delete rc nginx-rc
$ kubectl -s http://kube-master01:8080 delete rc nginx-rc --cascade=false


## Get a shell inside nginx
$ kubectl -s http://kube-master01:8080 exec -it nginx-3449338310-sl1ou -- /bin/bash

## View the container logs in a pod
$ kubectl -s http://kube-master01:8080 logs -f nginx-3449338310-sl1ou
$ kubectl -s http://kube-master01:8080 logs --tail=20 nginx-3449338310-sl1ou
$ kubectl -s http://kube-master01:8080 logs --since=1m nginx-3449338310-sl1ou




## Guestbook (redis-master, redis-slave, frontend) sample
$ cd ~/go_workspace/src/k8s.io/kubernetes
$ kubectl -s http://kube-master01:8080 create -f examples/guestbook/


## Change the frontend Service type from ClusterIP to NodePort
$ kubectl -s http://kube-master01:8080 edit services/frontend
28   type: NodePort


## View the output in json format
$ kubectl -s http://kube-master01:8080 get svc frontend -o json
$ kubectl -s http://kube-master01:8080 get svc frontend -o "jsonpath={.spec.ports[0].nodePort}{"\n"}"


## Delete the Guestbook
$ kubectl -s http://kube-master01:8080 delete deployments,services -l "app in (redis, guestbook)"



## Creating a pod from the command line
$ kubectl -s http://kube-master01:8080 run frontend --image=gcr.io/google-samples/gb-frontend:v4 \
--env="GET_HOSTS_FROM=dns" --port=80 --replicas=3 \
--limits="cpu=100m,memory=100Mi" \
--labels="app=guestbook,tier=frontend"

## Create the frontend Service as a NodePort type from the command line
$ kubectl -s http://kube-master01:8080 expose deployment frontend \
--port=80 --target-port=80 --name=frontend --type=NodePort \
--labels=app=guestbook,tier=frontend --selector=app=guestbook,tier=frontend





###################################
## Install glusterFS and use it from Kubernetes
###################################
## Install glusterFS

## Run on both gluster01 and gluster02
# vi /etc/hosts
192.168.30.15   kube-node01 gluster01
192.168.30.16   kube-node02 gluster02

# mkfs.xfs -f /dev/sdb
# mkdir -p /data/gluster/brick1 && chmod -R 777 /data
# echo '/dev/sdb /data/gluster/brick1 xfs defaults 1 2' >> /etc/fstab
# mount -a && mount

# apt-get install -y glusterfs-server glusterfs-client
# service glusterfs-server start


## On gluster01
# gluster peer probe gluster02

## On gluster02
# gluster peer probe gluster01


## Run on both gluster01 and gluster02
# mkdir -p /data/gluster/brick1/gv0 && chmod -R 777 /data
# mkdir -p /data/gluster/brick1/gv1 && chmod -R 777 /data/gluster/brick1/gv1


## Run on gluster01 (gv1 is the volume that pods will use when connecting to glusterfs)
# gluster volume create gv0 replica 2 gluster01:/data/gluster/brick1/gv0 gluster02:/data/gluster/brick1/gv0
# gluster volume start gv0

# gluster volume create gv1 replica 2 gluster01:/data/gluster/brick1/gv1 gluster02:/data/gluster/brick1/gv1
# gluster volume start gv1
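The replicated volumes can be checked before mounting; the output should look roughly like this (abbreviated):

# gluster volume info gv1
Volume Name: gv1
Type: Replicate
Status: Started
Number of Bricks: 1 x 2 = 2
Bricks:
Brick1: gluster01:/data/gluster/brick1/gv1
Brick2: gluster02:/data/gluster/brick1/gv1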


## Run on both gluster01 and gluster02
# mkdir -p /ext && chmod 777 -R /ext
# mount -t glusterfs gluster01:/gv0 /ext



## Test kubernetes with glusterFS
$ cd ~/go_workspace/src/k8s.io/kubernetes
$ cp examples/volumes/glusterfs/*.json ~/kube && cd ~/kube

$ vi glusterfs-endpoints.json
11           "ip": "192.168.30.15"
23           "ip": "192.168.30.16"

$ vi glusterfs-pod.json
11                 "image": "nginx"
25                     "path": "gv1",
26                     "readOnly": false


$ ssh kube-master01 "mkdir -p kube"
$ scp ~/kube/glusterfs-endpoints.json kube-master01:~/kube/.
$ scp ~/kube/glusterfs-service.json kube-master01:~/kube/.
$ scp ~/kube/glusterfs-pod.json kube-master01:~/kube/.

$ ssh kube-master01 "kubectl create -f ~/kube/glusterfs-endpoints.json"
$ ssh kube-master01 "kubectl create -f ~/kube/glusterfs-service.json"
$ ssh kube-master01 "kubectl create -f ~/kube/glusterfs-pod.json"


$ kubectl -s http://kube-master01:8080 get pods glusterfs -o jsonpath='{.status.hostIP}{"\n"}'

$ ssh kube-node01 "mount | grep gv1"







10.0.0.171    kafka01 zookeeper01
10.0.0.172    kafka02 zookeeper02
10.0.0.173    kafka03 zookeeper03

192.168.30.171    kafka01 zookeeper01
192.168.30.172    kafka02 zookeeper02
192.168.30.173    kafka03 zookeeper03


## Create the kafka vms

$ openstack flavor create --id ka1 --ram 8192 --disk 160 --vcpus 2 kafka


$ openstack server create --image 7498cf9d-bd2e-4401-9ae9-ca72120272ed \
       --flavor ka1  --nic net-id=03a6de58-9693-4c41-9577-9307c8750141,v4-fixed-ip=10.0.0.171 \
       --key-name magnum-key --security-group default kafka01

$ openstack ip floating create --floating-ip-address 192.168.30.171 public

$ openstack ip floating add 192.168.30.171 kafka01





## Install Oracle Java 8

$ sudo add-apt-repository ppa:webupd8team/java

$ sudo apt-get update

$ sudo apt-get install oracle-java8-installer


## Managing multiple installed Java versions

$ sudo update-alternatives --config java





## Install zookeeper

## https://zookeeper.apache.org/doc/r3.4.9/zookeeperStarted.html

## http://apache.mirror.cdnetworks.com/zookeeper/zookeeper-3.4.9/


$ mkdir -p downloads && cd downloads

$ wget http://apache.mirror.cdnetworks.com/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz


$ sudo tar -C /usr/local -xzvf zookeeper-3.4.9.tar.gz

$ cd /usr/local

$ sudo ln -s zookeeper-3.4.9/ zookeeper


$ vi /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=zookeeper01:2888:3888
server.2=zookeeper02:2888:3888
server.3=zookeeper03:2888:3888


$ vi /usr/local/zookeeper/bin/zkEnv.sh

56     ZOO_LOG_DIR="/var/log/zookeeper"


$ sudo mkdir -p /var/log/zookeeper && sudo chown -R stack.stack /var/log/zookeeper


## Set the zookeeper myid individually on each server

$ sudo mkdir -p /var/lib/zookeeper && sudo chown -R stack.stack /var/lib/zookeeper

$ vi /var/lib/zookeeper/myid
1
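The ids follow the server.N entries in zoo.cfg, so each server gets a different value:

## zookeeper01:  echo 1 > /var/lib/zookeeper/myid
## zookeeper02:  echo 2 > /var/lib/zookeeper/myid
## zookeeper03:  echo 3 > /var/lib/zookeeper/myid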


$ vi ~/.bashrc
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export ZOOKEEPER_HOME=/usr/local/zookeeper
PATH=$PATH:$ZOOKEEPER_HOME/bin


$ . ~/.bashrc

$ zkServer.sh start
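Once all three servers are started, each node's role can be checked; one node should report leader and the others follower (output similar to):

$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower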


## Verify the zookeeper installation

$ zkCli.sh -server zookeeper01:2181





## Install Kafka

## https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-ubuntu-14-04


## https://kafka.apache.org/downloads.html

## https://kafka.apache.org/documentation.html


$ cd downloads

$ wget http://apache.mirror.cdnetworks.com/kafka/0.10.0.1/kafka_2.11-0.10.0.1.tgz

$ sudo tar -C /usr/local -xzvf kafka_2.11-0.10.0.1.tgz

$ cd /usr/local && sudo chown -R stack.stack kafka_2.11-0.10.0.1

$ sudo ln -s kafka_2.11-0.10.0.1/ kafka


## The broker id must be unique on each server

$ vi /usr/local/kafka/config/server.properties
20 broker.id=0
56 log.dirs=/var/lib/kafka
112 zookeeper.connect=zookeeper01:2181,zookeeper02:2181,zookeeper03:2181
117 delete.topic.enable = true
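Following the same per-server convention as the zookeeper myid values (an assumed mapping):

## kafka01: broker.id=0
## kafka02: broker.id=1
## kafka03: broker.id=2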


$ sudo mkdir -p /var/lib/kafka && sudo chown -R stack.stack /var/lib/kafka

$ sudo mkdir -p /var/log/kafka && sudo chown -R stack.stack /var/log/kafka


$ vi ~/.bashrc
export KAFKA_HOME=/usr/local/kafka
PATH=$PATH:$KAFKA_HOME/bin


$ . ~/.bashrc

$ nohup kafka-server-start.sh $KAFKA_HOME/config/server.properties > /var/log/kafka/kafka.log 2>&1 &
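Whether each broker has registered with zookeeper can be verified from zkCli; the listed ids should match the broker.id values (output assuming all three brokers are up):

$ zkCli.sh -server zookeeper01:2181 ls /brokers/ids
[0, 1, 2]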


## kafkaT : kafka cluster management

$ sudo apt-get -y install ruby ruby-dev build-essential

$ sudo gem install kafkat --source https://rubygems.org --no-ri --no-rdoc

$ vi ~/.kafkatcfg
{
  "kafka_path": "/usr/local/kafka",
  "log_path": "/var/lib/kafka",
  "zk_path": "zookeeper01:2181,zookeeper02:2181,zookeeper03:2181"
}


## View kafka partitions

$ kafkat partitions


## Test kafka data

$ echo "Hello, World" | kafka-console-producer.sh --broker-list kafka01:9092,kafka02:9092,kafka03:9092 --topic TutorialTopic > /dev/null

$ kafka-console-consumer.sh --zookeeper zookeeper01:2181,zookeeper02:2181,zookeeper03:2181 --topic TutorialTopic --from-beginning


$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

$ kafka-topics.sh --list --zookeeper localhost:2181


$ kafka-console-producer.sh --broker-list localhost:9092 --topic test

This is a message

This is another message


$ kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning


## Test with replication factor 3

$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic

$ kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic

Topic:my-replicated-topic    PartitionCount:1    ReplicationFactor:3    Configs:
    Topic: my-replicated-topic    Partition: 0    Leader: 0    Replicas: 0,2,1    Isr: 0,2,1


$ kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic

my test message 1

my test message 2

^C


$ kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic


## Take one server down

$ kafka-server-stop.sh


$ kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic

Topic:my-replicated-topic    PartitionCount:1    ReplicationFactor:3    Configs:
    Topic: my-replicated-topic    Partition: 0    Leader: 0    Replicas: 0,2,1    Isr: 0,1


$ kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic


## Delete the topics

$ kafka-topics.sh --delete --zookeeper localhost:2181 --topic my-replicated-topic

$ kafka-topics.sh --delete --zookeeper localhost:2181 --topic TutorialTopic








