Use the localectl tool or edit /etc/locale.conf
1. Change locale.conf
# cat /etc/locale.conf
LANG=pt_BR.utf8
LANGUAGE=
LC_CTYPE="pt_BR.utf8"
LC_NUMERIC="pt_BR.utf8"
LC_TIME="pt_BR.utf8"
LC_COLLATE=C
LC_MONETARY="pt_BR.utf8"
LC_MESSAGES="pt_BR.utf8"
LC_PAPER="pt_BR.utf8"
LC_NAME="pt_BR.utf8"
LC_ADDRESS="pt_BR.utf8"
LC_TELEPHONE="pt_BR.utf8"
LC_MEASUREMENT="pt_BR.utf8"
LC_IDENTIFICATION="pt_BR.utf8"
LC_ALL=
2. Reboot the system
# reboot
What if some apps don't show the new language? They probably don't support different languages (locales).
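If you prefer the localectl route mentioned above instead of editing the file by hand, a rough equivalent (assuming the pt_BR locale is already generated on the system) is:
# localectl list-locales | grep -i pt_BR
# localectl set-locale LANG=pt_BR.utf8
# localectl status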
Reference: https://www.osetc.com/en/centos-7-rhel-7-change-the-system-locale.html
Thursday, October 31, 2019
Tuesday, October 29, 2019
Patches of the day:
INSTALL - make sure postgres pod is running with label app set
https://github.com/ansible/awx/pull/5134/commits/1cfa0fe2f6702a8721e816b04e6ed40301edc95b
Monday, October 28, 2019
Patches of the day
Project: subscription-manager
Subject: managercli: use encode('utf-8') in strings
https://github.com/candlepin/subscription-manager/pull/2179
Project: awx
Subject: [docs] INSTALL - make sure postgres pod is running with label name set
https://github.com/ansible/awx/pull/5134
subscription-manager commands
Register
# subscription-manager register
Not sure about the specific pool ID?
# subscription-manager list --available --all
Attach a subscription from a specific pool:
# subscription-manager attach --pool=POOL_ID
Attach a subscription from any available pool that matches the system:
# subscription-manager attach --auto
Disable repo:
# subscription-manager repos --disable ${REPO_NAME}
Enable repo:
# subscription-manager repos --enable ${REPO_NAME}
List repos assigned to your account:
# subscription-manager repos --list
More info: https://access.redhat.com/solutions/253273
List all enabled repos:
# subscription-manager repos --list-enabled
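A typical end-to-end session might look like the sketch below; USER, PASS and the repo name are placeholders, replace them with your own values:
# subscription-manager register --username=USER --password=PASS
# subscription-manager attach --auto
# subscription-manager repos --enable rhel-7-server-extras-rpms
# subscription-manager status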
Assign Pods to Nodes - nodeSelector
See current labels in the nodes:
# kubectl get nodes --show-labels
Creating a label for a specific host:
# kubectl label nodes {NODE_NAME} cputype=xeon
Assign the pod to nodes with the label cputype=xeon by adding a nodeSelector:
# kubectl edit deployment {MY_APP}
apiVersion: v1
kind: Pod
metadata:
  name: superapp
  labels:
    env: test
spec:
  containers:
  - name: superapp
    image: superapp
    imagePullPolicy: IfNotPresent
  nodeSelector:
    cputype: xeon
Check if the pod is now running on the labeled node:
# kubectl get pods -o wide
Resource: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
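If you prefer not to open an editor, the same nodeSelector can be added non-interactively with kubectl patch; this is just a sketch using the same {MY_APP} placeholder:
# kubectl patch deployment {MY_APP} -p '{"spec":{"template":{"spec":{"nodeSelector":{"cputype":"xeon"}}}}}'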
Sunday, October 27, 2019
Kibana + ElasticSearch + filebeat (image 7.4.1)
# kubectl create ns logging
# helm repo add elastic https://helm.elastic.co
# helm install elastic/elasticsearch --name elasticsearch --set elasticsearch.enabled=true --set data.terminationGracePeriodSeconds=0 --namespace=logging --set imageTag=7.4.1
# helm install elastic/kibana --name kibana --set elasticsearchHosts=http://elasticsearch-master.logging.svc.cluster.local:9200 --namespace logging --set imageTag=7.4.1
# helm install elastic/filebeat --name filebeat --namespace logging --set imageTag=7.4.1
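To check that everything came up and to reach Kibana without an ingress, something like this should work (kibana-kibana is the chart's default service name and may differ in your release):
# kubectl get pods -n logging
# kubectl port-forward -n logging svc/kibana-kibana 5601:5601
Then browse to http://localhost:5601.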
Saturday, October 26, 2019
helm cleanup
#!/bin/bash
# Remove all files in these directories.
rm -rf ~/.helm/cache/archive/*
rm -rf ~/.helm/repository/cache/*
# Refresh repository configurations
helm repo update
# That's all.
# The next time you run "helm search" you will see the newest stable charts in the repository.
Source: https://gist.github.com/naotookuda/db74069729ba0b7740b658b689cba65b
helm autocomplete - setting editor and alias
$ helm completion bash > /etc/bash_completion.d/helm.sh
$ echo "alias h='helm'" >> ~/.bashrc
$ echo "source <(helm completion bash)" >> ~/.bashrc
$ echo "complete -o default -F __start_helm h" >> ~/.bashrc
$ echo "export KUBE_EDITOR=vi" >> ~/.bashrc
$ echo "export EDITOR=vi" >> ~/.bashrc
$ echo "export HELM_EDITOR=vi" >> ~/.bashrc
$ source ~/.bashrc
$ echo "alias h='helm'" >> ~/.bashrc
$ echo "source <(helm completion bash)" >> ~/.bashrc
$ echo "complete -o default -F __start_helm h" >> ~/.bashrc
$ echo "export KUBE_EDITOR=vi" >> ~/.bashrc
$ echo "export EDITOR=vi" >> ~/.bashrc
$ echo "export HELM_EDITOR=vi" >> ~/.bashrc
$ source ~/.bashrc
Thursday, October 24, 2019
kubernetes: creating a namespace
$ cat ns-development.yaml
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "name": "development",
        "labels": {
            "name": "development"
        }
    }
}
$ kubectl apply -f ns-development.yaml
To see the new namespace created:
$ kubectl get ns
Or simply create it via kubectl:
$ kubectl create ns development
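The same manifest in YAML, if you find that easier to read than JSON:
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development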
Wednesday, October 23, 2019
coredns: debugging - See logs
1. First, open the coredns ConfigMap:
$ kubectl -n kube-system edit configmap coredns
2. Add the log plugin:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        log
        errors
        health
        .........
3. Wait a little bit for the change to propagate
4. Execute the DNS query
5. Look at the logs:
$ for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
You can also use dnstools:
$ kubectl exec -it -n default dnstools-74dd77498d-r8klc /bin/sh
dnstools# nslookup
> server 10.233.0.3
Default server: 10.233.0.3
Address: 10.233.0.3#53
> postgres.default.svc.cluster.local
Server: 10.233.0.3
Address: 10.233.0.3#53
Name: postgres.default.svc.cluster.local
Address: 10.233.59.55
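If you don't already have a dnstools pod running, a throwaway one can be started like this (infoblox/dnstools is just one commonly used debug image; any image with nslookup/dig will do):
$ kubectl run -it --rm dnstools --image=infoblox/dnstools --restart=Never -- sh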
More info: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
postgres: alter postgres password
# psql -U postgres
postgres> ALTER USER postgres with password 'my-super-password';
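The same change can also be made non-interactively from the shell:
# psql -U postgres -c "ALTER USER postgres WITH PASSWORD 'my-super-password';"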
traefik: {"level":"error","msg":"failed to load X509 key pair: tls: failed to find any PEM data in certificate input","time":"2019-10-23T03:56:29Z"}
This error can stem from several issues with a self-signed certificate; the main cause is listed below:
Make sure defaultKey and defaultCert are in base64 format
# cat traefik.yaml
dashboard:
  domain: traefik.medogz.home
  enabled: true
kubernetes.ingressEndpoint.publishedService: kube-system
loadBalancerIP: 192.168.1.2
kubernetes:
  ingressEndpoint:
    useDefaultPublishedService: true
ssl:
  enabled: true
  enforced: true
  insecureSkipVerify: true
  generateTLS: false
  tlsMinVersion: VersionTLS12
  defaultKey: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ.........
  defaultCert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1.........
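To produce values in that single-line base64 form from existing PEM files (the file names below match the wildcard cert used elsewhere in these notes; adjust to yours), GNU base64 with -w0 avoids line wrapping:
# base64 -w0 wildcard.medogz.home.key
# base64 -w0 wildcard.medogz.home.crt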
Fedora: NetworkManager + vpn + l2tp packages
# dnf install NetworkManager-l2tp NetworkManager-l2tp-gnome NetworkManager-strongswan-gnome NetworkManager-strongswan -y
# systemctl restart NetworkManager
Kubespray + CentOS8
I have tested the following patches and they worked:
https://github.com/kubernetes-sigs/kubespray/pull/5213/commits
Tuesday, October 22, 2019
kubeapps: ingress via traefik
$ cat ingress-kubeapps.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: traefik
  labels:
    created-by: kubeapps
    name: kubeapps
  name: kubeapps
  namespace: kubeapps
spec:
  rules:
  - host: kubeapps.medogz.home
    http:
      paths:
      - backend:
          serviceName: kubeapps
          servicePort: 80
        path: /
  tls: []
$ kubectl apply -f ingress-kubeapps.yaml
Download here in raw format
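To confirm traefik picked up the ingress, one way is to list it and send a request with the right Host header (assuming traefik is reachable on the 192.168.1.2 loadBalancerIP used earlier in these notes):
$ kubectl -n kubeapps get ingress kubeapps
$ curl -H "Host: kubeapps.medogz.home" http://192.168.1.2/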
kubernetes: Installing kubeapps and adding ingress via traefik
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install --name kubeapps --namespace kubeapps bitnami/kubeapps
$ kubectl create serviceaccount kubeapps-operator
$ kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator
$ kubectl apply -f ingress-kubeapps.yaml
$ kubectl get secret $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{.secrets[].name}') -o jsonpath='{.data.token}' -o go-template='{{.data.token | base64decode}}' && echo
Access kubeapps.medogz.home and use the token from the get secret command above.
More info: https://github.com/kubeapps/kubeapps/blob/master/docs/user/getting-started.md
traefik: Edit deployment
To avoid duplicate traefik pods, set scale to 0 and later to 1.
$ kubectl scale deployment --replicas=0 traefik -n kube-system
$ kubectl edit deploy --namespace kube-system traefik
$ kubectl scale deployment --replicas=1 traefik -n kube-system
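To confirm the new pod came up cleanly after scaling back to 1:
$ kubectl -n kube-system rollout status deployment traefik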
traefik: get secrets and convert to original text format
$ kubectl -n kube-system get secrets traefik-cert -o yaml
apiVersion: v1
data:
wildcard.medogz.home.crt:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUUrRENDQXVDZ0F3SUJBZ0lKQUw1VTMvQWNMOFFMTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdFTVFzd0NRWUQKVlFRR0V3SlZVekVXTUJRR0ExVUVDQXdOVG1WM0lFaGhiWEJ6YUdseVpURVBNQTBHQTFVRUJ3d0dUbUZ6YUhWaApNUTh3RFFZRFZRUUtEQVpOWldSdlozb3hGekFWQmdOVkJBTU1EazFGUkU5SFdpQkRRU0JTVDA5VU1TSXdJQVlKCktvWklodmNOQVFrQkZoTmtiM1ZuYzJ4
aGJtUkFaMjFoYVd3dVkyOXRNQjRYRFRFNU1EZ3hNekl3TkRrek5sb1gKRFRJNE1UQXlPVEl3TkRrek5sb3dnWU14Q3pBSkJnTlZCQVlUQWxWVE1SWXdGQVlEVlFRSURBMU9aWGNnU0dGdApjSE5vYVhKbE1ROHdEUVlEVlFRSERBWk9ZWE5vZFdFeER6QU5CZ05WQkFvTUJrMWxaRzluZWpFV01CUUdBMVVFCkF3d05LaTV0WldSdlozb3VhRzl0WlRFaU1DQUdDU3FHU0liM0RRRUpBUllUWkc5MVozTnNZVzVrUUdkdFlXbHMKTG1OdmJUQ0NB
TGVtSwo1WnVQNWw5MTdtYituMEJsUU90QTBJOWVIUVJXUExEL25nMFRLTWR4UndQeE5FdzRPSlJiTGxoSk4zVWkydkp3Ck8zQXR0M3BSQ090SU00MVJsdmkwc3M3enNvdkpyOWlUTWhPVVpnenNPTUZSRmdtZjdNMytxWi9MMTU0QUhuWVEKMFFIamRoMENBd0VBQWFOc01Hb3dId1lEVlIwakJCZ3dGb0FVZTFrdmFOOGZlMjdQRjErNERtRFdWRnVxZTdJdwpDUVlEVlIwVEJBSXdBREFMQmdOVkhROEVCQU1DQlBBd0x3WURWUjBSQkNnd0pv
SVJiblZqTURFdWJXVmtiMmQ2CkxtaHZiV1dDRVc1MVl6QXlMbTFsWkc5bmVpNW9iMjFsTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElDQVFDV1JKRUEKMkt2RlY2U2F3Ky9sblYyeVY4UExOV1lsVUZLUDlRaExzMG9SUndBdEtSNUFvNUNwV1pnd08wS3NiLzVYMGdPNwpaWmpEbW5jZjNqMEcvVnpVUGM3SVRBZE9zZE1ocFFmb2JsWkZ1SmJTWG9CWG1uZUxDUko0cTRwbW5hUkQ2TFFyCnRqU0FINVRodkxqRGJuVnN3WC9SSStYUWVUZkRvbUJK
QmtuNFJkeUpzRnVTNlkvRTY2UWo2bDg0M2Z0cjRxem8KaFZGRjlmUjdTUzVJWnE1ZzhKSElVazJtaGFSNnZEYjRFaVFzU0g5ZW56Y1ZDaE9PeUpqdFE0ZDd2Ui9PVXU5MgozL1ViSFE3eWFuVWxaYWhPeUZpUW5DU01iZWtJcW5wV0NRQlBRSTBHSkdpNkVIU21rUGlveEN5dHd5Y1gvTVdHCjVmTHNEdnNGcDZFWENYQ0NvUlR2SVdCMTRZcWp1NWx2MXBwMDUybXk4M2s0ZjB3VEs4UUllajhDS1pNMUlUdTgKa0VkRFZuSzR0cXB2NURZb0Vj
K0NLeFpncmRTb0s4QzR1MkFBSGgvZVRxVk5aVk5ubnZGcGVQYVQzVUU2Y3VSbwp3bDhiQjRIa0xzcGNCcUdweUlicTN1YTdaZXlPUmVaeTk0NHhteWM4RlcrakJUUFYyUTA5cXBiYS9pTnBJWDMxCjh6S21SS29WYVd4Q0tGT21ZUUNIQnJmcWlBZDUrWXdYa29Jd2dOc0JmTi9DVjdPdWNGcXhBOTlZMXpCNVoyaFgKNEZSR1Fuc2t4RkVKWnBUaGNBdDNCemJOZ2l6M2hCOXNOQTZtWFZKWG5QcXlwelpaNlNLVXM5dm9ubk12bjBteAovSG9L
wildcard.medogz.home.key:
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBMjJzNjl1NDU0a1VjbHYrSExibkFCZGRMOW1tVGZUMjlKdWVzRnl6bkcydkoyWkJKCktPaDFiUHo4WGVSOW5HbWpSampoSGFZK1BQcUF2ZG11dDJqc0hDYzNLMitPdUdOR2VaNldQS0x6a3krT1AreWoKRzFDMDBsYTJxVFcxcG5tVDZEK01ubVoybE9QVkRNUmd2VksyL0tsSnV6ejFqTTVYUWpEODY5Rk9MY0tzUmFhZwpLbysycmlvblVGbkVPV2lw
kind: Secret
metadata:
creationTimestamp: "2019-10-22T19:26:12Z"
name: traefik-cert
namespace: kube-system
resourceVersion: "576062"
selfLink: /api/v1/namespaces/kube-system/secrets/traefik-cert
uid: 0ae89b40-8707-486b-9bc3-25484b5569d9
type: Opaque
Now decode the output back to its original form:
$ echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUUrRENDQXVDZ0F3SUJBZ0lKQUw1VTMvQWNMOFFMTUEwR0NTc....." | base64 -d
traefik: {"level":"warning","msg":"Endpoints not available for kube-system/gitea-http-service","time":"2019-10-22T19:33:18Z"}
Resolution: the gitea pod must be created in the kube-system namespace.
traefik error: ingresses.extensions is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "ingresses" in API group "extensions" at the cluster scope
E1022 14:54:08.904814 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "ingresses" in API group "extensions" at the cluster scope
E1022 14:54:08.918146 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "services" in API group "" at the cluster scope
E1022 14:54:08.918944 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "endpoints" in API group "" at the cluster scope
To solve that:
$ kubectl create clusterrolebinding --user system:serviceaccount:kube-system:default kube-system-cluster-admin --clusterrole cluster-admin
Enable kubectl autocompletion and set k as alias to kubectl
# dnf install bash-completion
$ echo 'source <(kubectl completion bash)' >>~/.bashrc
$ kubectl completion bash >/etc/bash_completion.d/kubectl
$ echo 'alias k=kubectl' >>~/.bashrc
$ echo 'complete -F __start_kubectl k' >>~/.bashrc
More info: https://kubernetes.io/docs/tasks/tools/install-kubectl/#enable-kubectl-autocompletion
Monday, October 21, 2019
Saturday, October 19, 2019
Kubernetes: Set a node as worker
1. Get the current roles
# kubectl get nodes
2. Set the new role
# kubectl label nodes ${NODE_NAME} node-role.kubernetes.io/worker=worker
3. See the new changes
# kubectl get nodes
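If you ever need to undo it, the same label can be removed by appending a dash to the key:
# kubectl label nodes ${NODE_NAME} node-role.kubernetes.io/worker-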
LVM Resize – How to increase or expand the logical volume
1. Oooops, we don't have space anymore!
# df -h
2. Look at the current layout of the LVs and VGs
# vgdisplay -m
# lvdisplay
3. Unmount the filesystem of the LV that will be removed
# umount /dev/cl/home
4. Remove the LV
# lvremove /dev/cl/home
5. Remove the VG
# vgremove cl
6. Now it's time to extend /dev/mapper/centos-root
# lvextend -r -l +100%FREE /dev/mapper/centos-root
7. Grow the filesystem on /dev/centos/root (XFS)
# xfs_growfs /dev/centos/root
* For ext3/ext4, use: resize2fs /dev/centos/root
8. See the result
# df -h
tcpdump - capturing network packets from a specific host (example)
Capturing
# tcpdump -n -i eth0 -s0 host 192.168.1.50 -w 0001.pcap
Reading
# tcpdump -r 0001.pcap
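Filters can be combined as needed; for example, capturing only traffic from that host on a single port (443 is just an example):
# tcpdump -n -i eth0 -s0 host 192.168.1.50 and port 443 -w 0002.pcap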
Thursday, October 17, 2019