https://en.wikipedia.org/wiki/OpenID
https://en.wikipedia.org/wiki/OpenID_Connect
https://en.wikipedia.org/wiki/OAuth
https://en.wikipedia.org/wiki/OAuth#OAuth_2.0
Tuesday, December 31, 2019
java 11 certifications 1Z0-815 1Z0-816
https://education.oracle.com/oracle-certified-professional-java-se-11-developer/trackp_815
https://www.oracle.com/a/ocom/img/dc/ww-java11-programmer-study-guide.pdf?intcmp=WWOUCERTBLOGBBTYJK050219
https://javarevisited.blogspot.com/2019/07/top-4-java-11-certification-free-mock-exams-practice-tests-ocajp11-ocpjp11-1z0-815-16-questions.html
https://www.baeldung.com/java-9-modularity
unnamed module https://www.logicbig.com/tutorials/core-java-tutorial/modules/unnamed-modules.html
Labels:
1Z0-815,
1Z0-816,
certifications
Collections
https://dzone.com/articles/the-best-of-java-collections-tutorials
http://falkhausen.de/Java-10/java.util/Collection-Hierarchy.html (collection hierarchy diagram)
https://dzone.com/articles/a-deep-dive-into-collections
https://dzone.com/articles/the-developers-guide-to-collections-lists
https://dzone.com/articles/the-developers-guide-to-collections-sets
https://dzone.com/articles/the-developers-guide-to-collections-queues
Labels:
collections,
java
Functional Interfaces
The Supplier interface is a generic functional interface with a single method that takes no arguments and returns a value of a parameterized type.
The Consumer functional interface's single method takes a parameter and returns void.
Function represents a function that accepts one argument and produces a result.
BiFunction represents a function that accepts two arguments and produces a result; it is the two-arity specialization of Function.
BinaryOperator represents an operation upon two operands of the same type, producing a result of the same type as the operands.
(freely extracted from https://www.baeldung.com/java-completablefuture
and from https://www.baeldung.com/java-bifunction-interface
)
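A minimal Java demo of all five interfaces (a sketch; names and values are made up):

import java.util.function.*;

public class FunctionalDemo {
    public static void main(String[] args) {
        Supplier<String> supplier = () -> "hello";                      // no args -> value
        Consumer<String> printer = s -> System.out.println(s);         // one arg -> void
        Function<String, Integer> length = s -> s.length();            // one arg -> result
        BiFunction<String, String, Integer> totalLength =
                (a, b) -> a.length() + b.length();                     // two args -> result
        BinaryOperator<Integer> sum = (a, b) -> a + b;                 // two operands, same type in and out

        System.out.println(length.apply(supplier.get()));              // 5
        System.out.println(sum.apply(totalLength.apply("a", "bc"), 1)); // 4
        printer.accept("done");
    }
}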
Labels:
functional
Friday, December 27, 2019
LinkedBlockingQueue
import java.util.concurrent.LinkedBlockingQueue;

public class MyLinkedBlockingQueue {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> linkedBlockingQueue = new LinkedBlockingQueue<>();
        linkedBlockingQueue.put("PIPPO");
        linkedBlockingQueue.put("PLUTO");
        System.out.println(linkedBlockingQueue);
        String bla1 = linkedBlockingQueue.peek();  // returns the head without removing it
        System.out.println(bla1);
        String bla2 = linkedBlockingQueue.take();  // removes and returns the head
        System.out.println(bla2);
        String bla3 = linkedBlockingQueue.take();
        System.out.println(bla3);
        String bla4 = linkedBlockingQueue.take();  // queue is now empty: blocks
        System.out.println(bla4);
        System.out.println(linkedBlockingQueue);
    }
}
the output is:
[PIPPO, PLUTO]
PIPPO // this is the peek
PIPPO // this is the first take
PLUTO // this is the second take
and then at bla4 it blocks, because the Queue is empty...waiting for someone to put something into it...
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/LinkedBlockingQueue.html
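If blocking forever is not what you want, BlockingQueue also offers a timed poll (a small sketch, reusing the queue above):

String item = linkedBlockingQueue.poll(2, TimeUnit.SECONDS); // needs import java.util.concurrent.TimeUnit
System.out.println(item); // null if nothing arrived within 2 seconds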
Labels:
collections
Sunday, December 15, 2019
Collections
https://dzone.com/articles/the-best-of-java-collections-tutorials
http://falkhausen.de/Java-10/java.util/Collection-Hierarchy.html (collection hierarchy diagram)
https://dzone.com/articles/a-deep-dive-into-collections
https://dzone.com/articles/the-developers-guide-to-collections-lists
https://dzone.com/articles/the-developers-guide-to-collections-sets
https://dzone.com/articles/the-developers-guide-to-collections-queues
Labels:
collections
Thursday, November 28, 2019
memory effectively used by a Java process
cat /proc/102433/smaps | grep -i pss | awk '{Total+=$2} END {print Total/1024" MB"}'
499.514 MB
cat /proc/102433/smaps | grep -i rss | awk '{Total+=$2} END {print Total/1024" MB"}'
506.32 MB
jmap (JVM heap) and pmap (process memory map) give complementary views.
Note that PSS ("proportional set size") splits each shared page among the processes sharing it, while RSS counts shared pages in full, so the PSS total is usually the more realistic per-process figure.
To learn more about smaps, read here https://jameshunt.us/writings/smaps.html
Saturday, November 16, 2019
rats, in OracleXE DB a create user no longer works unless....
create user PIPPO IDENTIFIED BY PIPPO
fails with:
Error report - ORA-65096: invalid common user or role name
65096. 00000 - "invalid common user or role name"
*Cause: An attempt was made to create a common user or role with a name that was not valid for common users or roles. In addition to the usual rules for user and role names, common user and role names must start with C## or c## and consist only of ASCII characters.
*Action: Specify a valid common user or role name.
googling around I find that you must precede the command with:
alter session set "_ORACLE_SCRIPT"=true;
create user PIPPO IDENTIFIED BY PIPPO;
grant create session to PIPPO;
GRANT CONNECT, RESOURCE, CREATE VIEW TO PIPPO;
GRANT ALL PRIVILEGES to PIPPO;
create table USERS (id NUMBER GENERATED BY DEFAULT AS IDENTITY, name varchar(100) );
This is simply ridiculous.
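Alternatively, without the _ORACLE_SCRIPT workaround, you can create a common user with the mandated C## prefix (a sketch, following the rule the error message itself states):

create user C##PIPPO IDENTIFIED BY PIPPO;
grant create session to C##PIPPO;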
Labels:
oracledb
Thursday, November 14, 2019
creating a domain with WLST
in WLS 12.1+ the config.sh script no longer supports a silent installation:
./config.sh -silent -response response_file
see Document "Can config.sh be Run in Silent or Non-GUI Mode in Weblogic 12c ? (Doc ID 2370636.1) "
"The configuration tool only works in GUI mode with 12.1.x and above. In order to create a domain without using a GUI the WLST tool will need to be used. "
this is really stupid, 99% of Linux machines are headless
and using UI defeats automation
here how to create a domain with WLST:
https://docs.oracle.com/middleware/1221/wls/WLSTG/domains.htm#WLSTG429
but when you do:
selectTemplate('Base WebLogic Server Domain')
you get a
Caused by: com.oracle.cie.domain.script.ScriptException: 60708: Template not found.
60708: Found no templates with Base WebLogic Server Domain name and null version
60708: Select a different template.
        at com.oracle.cie.domain.script.ScriptExecutor.selectTemplate(ScriptExecutor.java:556)
        at com.oracle.cie.domain.script.jython.WLScriptContext.selectTemplate(WLScriptContext.java:580)
do a showAvailableTemplates()
20849: Available templates.
20849: Currently available templates for loading:
WebLogic Advanced Web Services for JAX-RPC Extension:12.2.1.3.0
WebLogic Advanced Web Services for JAX-WS Extension:12.2.1.3.0
WebLogic JAX-WS SOAP/JMS Extension:12.2.1.3.0
Basic WebLogic Server Domain:12.2.1.3.0
WebLogic Coherence Cluster Extension:12.2.1.3.0
20849: No action required.
so the solution is
selectTemplate('Basic WebLogic Server Domain', '12.2.1.3.0')
To find out more info on the domain template (like wls.jar) open its template-info.xml
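For reference, a minimal offline-WLST domain creation script (a sketch; the domain path, port and password are placeholders):

selectTemplate('Basic WebLogic Server Domain', '12.2.1.3.0')
loadTemplates()
# configure the admin server
cd('Servers/AdminServer')
set('ListenPort', 7001)
# set the weblogic user password
cd('/Security/base_domain/User/weblogic')
cmo.setPassword('welcome1')
writeDomain('/u01/domains/base_domain')
closeTemplate()
exit()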
Saturday, November 9, 2019
kubernetes quotas
kubectl create namespace pippo
kubectl create quota myhq --hard=cpu=1,memory=1G,pods=2 --namespace=pippo
kubectl run --restart=Never busybox --image=busybox --namespace=pippo
Error from server (Forbidden): pods "busybox" is forbidden: failed quota: myhq: must specify cpu,memory
You can create your pod with requests and limits:
kubectl run --restart=Never busybox --image=busybox --namespace=pippo --limits=cpu=100m,memory=512Mi --requests=cpu=50m,memory=256Mi --dry-run -o yaml > mypod.yaml
spec:
  containers:
  - image: busybox
    imagePullPolicy: IfNotPresent
    name: busybox
    resources:
      limits:
        cpu: 100m
        memory: 512Mi
      requests:
        cpu: 50m
        memory: 256Mi
obviously all values in "requests" must be <= values in limits
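For reference, the same quota expressed as a manifest (a sketch equivalent to the "kubectl create quota" command above):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: myhq
  namespace: pippo
spec:
  hard:
    cpu: "1"
    memory: 1G
    pods: "2"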
Labels:
kubernetes,
quota
busybox docker image
https://hub.docker.com/_/busybox/
docker pull busybox
docker inspect busybox
in the Config section (don't look at ContainerConfig) I see that the Cmd is simply "sh"
which means that if I do
kubectl run --restart=Never busybox -it --image=busybox
I simply get a shell.
If I create a pod that simply executes "env"
kubectl run --restart=Never busybox --image=busybox -- env
and then "inspect" it
kubectl get pod busybox -o yaml
I see that the container got the entry:
args:
- env
and since the "args" are simply appended to the default command defined in the container ("If you define args, but do not define a command, the default command is used with your new arguments.")... it's equivalent to specifying:
command: ["sh", "-c"] args: ["env"]
don't forget the "-c", otherwise you get "sh: can't open 'env': No such file or directory"
NB: if with "kubectl run" you specify --command
kubectl run busybox --image=busybox --restart=Never --command -- env
then "command" instead of "args" will be generated:
spec:
  containers:
  - command:
    - env
    image: busybox
    name: busybox
This aspect of containers/pods (how to run a command with arguments) is rather confusing and poorly engineered.
Thursday, November 7, 2019
JPA Parent child relationship
https://vladmihalcea.com/the-best-way-to-map-a-onetomany-association-with-jpa-and-hibernate/
a quick reminder to myself:
if Post -> PostComment is a unidirectional association (Post has a List<PostComment>, but PostComment has no reference to Post),
then the insertion of each PostComment is done in two phases (an INSERT plus an UPDATE that sets the parentId on the child table).
For this reason, the parentId column CANNOT BE DECLARED AS NOT NULL.
I don't like that... ultimately that column should NEVER be null.
On the other hand, if you set up a full bidirectional relationship, only ONE insert is done, and you can keep parentId as NOT NULL.
To achieve that, you must explicitly set the Post attribute on the PostComment entity before you persist/merge (I think, at least)
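A minimal sketch of the bidirectional mapping (entity and column names are illustrative):

@Entity
public class Post {
    @Id @GeneratedValue
    private Long id;

    @OneToMany(mappedBy = "post", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<PostComment> comments = new ArrayList<>();

    public void addComment(PostComment comment) {
        comments.add(comment);
        comment.setPost(this); // set the owning side before persist/merge
    }
}

@Entity
public class PostComment {
    @Id @GeneratedValue
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "post_id", nullable = false) // parentId can stay NOT NULL
    private Post post;

    public void setPost(Post post) { this.post = post; }
}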
Labels:
JPA
Monday, November 4, 2019
CKAD CNCF Kubernetes Certification
Today I have attempted CKAD certification test.
It was TOUGH, but not impossible; actually I was expecting worse in terms of complexity. And the pass threshold is 66%, which is mild.
Good thing is that each of the 19 questions is self-contained, so you can easily skip it and try later.
This fellow says already everything on the topic:
https://medium.com/@nassim.kebbani/how-to-beat-kubernetes-ckad-certification-c84bff8d61b1
I would say:
"Kubernetes in Action" book is surely very complete (I have ready it 3 times) but not really preparing you for the test. Too verbose IMHO.
MUST 1: Take the Udemy course "https://www.udemy.com/course/certified-kubernetes-application-developer/", definitely excellent and very hands on.
MUST 2: and do at least 3 times all the exercises here https://github.com/dgkanatsios/CKAD-exercises
then go for the test. You might fail at the first attempt (time pressure is very high!) but it will make you stronger. You have a free retake, so no worries, lick your wounds and try again a couple of weeks later.
CAVEAT CANEM: many exercises entail usage of namespaces - remember to practice usage of namespaces.
IMPORTANT: if you google around for CKAD VOUCHER or CKAD DISCOUNT you should find some magic words to get a 20% rebate on the exam (normally 300 USD.... I got it for 250 or so).
Labels:
ckad,
kubernetes
Thursday, October 17, 2019
Mocking rest services with Mockito and MockRestServiceServer in Spring
https://www.baeldung.com/spring-mock-rest-template
the code in the article is broken, here https://github.com/vernetto/mockrest you find a fixed version.
It's a very simple and elegant solution.
You see @InjectMocks and @Mock annotations in action. Here they are explained https://www.baeldung.com/mockito-annotations
Here the javadoc of https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/test/web/client/MockRestServiceServer.html
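The core of the pattern, as a sketch (URL and payload are invented; the matchers come as static imports from MockRestRequestMatchers and MockRestResponseCreators):

RestTemplate restTemplate = new RestTemplate();
MockRestServiceServer mockServer = MockRestServiceServer.createServer(restTemplate);
mockServer.expect(ExpectedCount.once(), requestTo("/employee/E001"))
          .andExpect(method(HttpMethod.GET))
          .andRespond(withSuccess("{\"id\": \"E001\"}", MediaType.APPLICATION_JSON));
// ... exercise the code under test, which must use this same restTemplate ...
mockServer.verify(); // fails the test if the expected call never happened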
Tuesday, October 15, 2019
kubernetes "change-cause" to describe a deployment
$ kubectl run nginx --image=nginx --replicas=4
$ kubectl annotate deployment/nginx kubernetes.io/change-cause='initial deployment'
deployment.extensions/nginx annotated
$ kubectl set image deploy nginx nginx=nginx:1.7.9
$ kubectl annotate deployment/nginx kubernetes.io/change-cause='nginx:1.7.9'
deployment.extensions/nginx annotated
$ kubectl set image deploy nginx nginx=nginx:1.9.1
$ kubectl annotate deployment/nginx kubernetes.io/change-cause='nginx:1.9.1'
deployment.extensions/nginx annotated
$ kubectl rollout history deploy nginx
deployment.extensions/nginx
REVISION CHANGE-CAUSE
5 initial deployment
6 nginx:1.7.9
7 nginx:1.9.1
This seems to me a very good practice, to be able to trace all changes in PROD.
You can always trace what changed:
kubectl rollout history deploy nginx --revision=6
deployment.extensions/nginx with revision #6
Pod Template:
  Labels:       pod-template-hash=7b74859c78
                run=nginx
  Containers:
   nginx:
    Image:      nginx:1.7.9
    Port:       <none>
    Host Port:  <none>
    Environment:  <none>
    Mounts:     <none>
  Volumes:      <none>
Labels:
kubernetes
joy of Openshift SCC
if you do
oc describe project
you will see 2 annotations
openshift.io/sa.scc.supplemental-groups=1000800000/10000
openshift.io/sa.scc.uid-range=1000800000/10000
Even if you specify "USER 10001" in your Dockerfile, your actual uid will be remapped within the range specified by those 2 annotations (the second parameter "/10000" is the block length, meaning you can have 10000 different uids starting from 1000800000):
sh-4.2$ id
uid=1000800000(root) gid=0(root) groups=0(root),1000800000
sh-4.2$ id root
uid=0(root) gid=0(root) groups=0(root)
and in order for this new user to be a first class citizen in your Linux, you must run a uid_entrypoint script to append it to /etc/passwd
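The uid_entrypoint trick looks roughly like this (a sketch based on the OpenShift image guidelines; USER_NAME is whatever default name you choose):

#!/bin/sh
# if the arbitrary uid has no passwd entry, add one so tools like whoami work
if ! whoami &> /dev/null; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-default}:x:$(id -u):0:${USER_NAME:-default} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi
exec "$@"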
for more details:
https://docs.openshift.com/enterprise/3.1/architecture/additional_concepts/authorization.html
https://docs.openshift.com/container-platform/3.11/creating_images/guidelines.html#openshift-specific-guidelines
Labels:
openshift
Monday, October 7, 2019
kubernetes mount file on an existing folder
With ConfigMap and Secret you can "populate" a volume with files and "mount" that volume to a container, so that the application can access those files.
echo "one=1" > file1.properties
echo "two=2" > file2.properties
kubectl create configmap myconfig --from-file file1.properties --from-file file2.properties
kubectl describe configmaps myconfig
Now I can mount the ConfigMap into a Pod, as described here
cat mypod.yml
kubectl create -f mypod.yml
kubectl exec -ti configmap-pod bash
cat /etc/config/myfile1.properties
one=1
Now I change the image to vernetto/mynginx, which contains already a /etc/config/file0.properties
The existing folder /etc/config/ is completely replaced by the volumeMount, so file0.properties disappears!
Only /etc/config/file1.properties is there.
They claim that one can selectively mount only one file from the volume, and leave the original files in the base image:
https://stackoverflow.com/questions/33415913/whats-the-best-way-to-share-mount-one-file-into-a-pod/43404857#43404857 using subPath, but it is definitely not working for me.
echo "one=1" > file1.properties
echo "two=2" > file2.properties
kubectl create configmap myconfig --from-file file1.properties --from-file file2.properties
kubectl describe configmaps myconfig
Name:         myconfig
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
file1.properties:
----
one=1

file2.properties:
----
two=2

Events:  <none>
Now I can mount the ConfigMap into a Pod, as described here
cat mypod.yml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
    - name: test
      image: nginx
      volumeMounts:
        - name: config-vol
          mountPath: /etc/config
  volumes:
    - name: config-vol
      configMap:
        name: myconfig
        items:
          - key: file1.properties
            path: myfile1.properties
kubectl create -f mypod.yml
kubectl exec -ti configmap-pod bash
cat /etc/config/myfile1.properties
one=1
Now I change the image to vernetto/mynginx, which contains already a /etc/config/file0.properties
The existing folder /etc/config/ is completely replaced by the volumeMount, so file0.properties disappears!
Only /etc/config/file1.properties is there.
They claim that one can selectively mount only one file from the volume, and leave the original files in the base image:
https://stackoverflow.com/questions/33415913/whats-the-best-way-to-share-mount-one-file-into-a-pod/43404857#43404857 using subPath, but it is definitely not working for me.
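For the record, the subPath pattern from that answer looks like this (a sketch; it is supposed to mount a single key over one file instead of shadowing the whole directory):

      volumeMounts:
        - name: config-vol
          mountPath: /etc/config/myfile1.properties
          subPath: myfile1.properties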
Labels:
configmap,
k8s,
kubernetes
Friday, October 4, 2019
Cisco CCNA 200-125
GNS3 https://gns3.com/
https://github.com/GNS3/gns3-gui/releases/download/v2.0.3/GNS3-2.0.3-all-in-one.exe
install 2.0.3 !
you can download IOS image here https://srijit.com/working-cisco-ios-gns3/ (download 7200 and 3745 )
in GNS3, go to Edit/Preferences/Dynamips/IOS routers/
also, GNS3 is a spiteful beast, I managed to make it work only by copying the "bin" files to C:\Users\Pierre-Luigi\GNS3\images\IOS and by running gns3server.exe in a cmd dos prompt. What a piece of crap!
Otherwise it will tell you
"Could not create IOS router: Error while setting up node: S:/pierre/downloads/c7200-advipservicesk9-mz.152-4.S5.bin is not allowed on this remote server. Please use only a filename in C:\Users\Pierre-Luigi\GNS3\images\IOS. error while deleting : Node ID 4f03d98e-bcca-4623-a2dc-9d5095eefb64 doesn't exist Could not create IOS router: Node ID 4f03d98e-bcca-4623-a2dc-9d5095eefb64 doesn't exist "
GNS3 allows you to construct and test networks in a risk-free virtual environment without the need for network hardware
CISCO packet-tracer https://www.itechtics.com/packet-tracer-download/ "a powerful network simulation software from Cisco Network Academy which can simulate/create a network without having a physical network" (with Netacademy you can take free course on Packet Tracer)
and download Packet Tracer here https://www.netacad.com/portal/resources/packet-tracer
Switch
Router
Firewall
Local Area Network
Wide Area Network
OSI Model (ISO standard): Application, Presentation, Session, Transport (port), Network (IP -> ROUTER), Datalink (MAC -> SWITCH), Physical
"please do not throw sausage pizza away" (a mnemonic for the OSI layers, bottom-up)
TCP/IP stack: Application, Transport, Internet, Network access
PDU (protocol data unit): data, segment, packet, frame
Network Calculator http://jodies.de/ipcalc?host=200.15.10.1&mask1=27&mask2=
http://subnettingquestions.com/
http://subnetting.org
https://www.cisco.com/c/en/us/support/docs/ip/enhanced-interior-gateway-routing-protocol-eigrp/16406-eigrp-toc.html EIGRP
Monday, September 30, 2019
beauty of Spring Data
thanks to Greg Turnquist
https://www.slideshare.net/SpringCentral/introduction-to-spring-data
https://spring.io/guides/ (type "data" in the search box)
my code is available here https://github.com/vernetto/springdata
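The core idea in one interface (a sketch; entity and method names are invented; Spring Data derives the query from the method name and generates the implementation at runtime):

public interface EmployeeRepository extends CrudRepository<Employee, Long> {
    List<Employee> findByLastName(String lastName);
}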
Labels:
Spring,
springdata
Saturday, September 21, 2019
Istio presentation by Burr Sutter
Presentation slides http://bit.ly/istio-canaries
and the code is here https://github.com/redhat-developer-demos/istio-tutorial
What I understand is that Istio provides you a central control to monitor and manage routes among services - based on iptables rather than software proxies.
plenty of labs here https://github.com/redhat-developer-demos/istio-tutorial
So Istio should completely replace Netflix Cloud stuff (hystrix, sleuth, Service Registry...) - if I understand correctly.
Labels:
istio
Thursday, September 19, 2019
REST Management Services in weblogic reverse engineered
wls-management-services.war
only Administrator and Operator can invoke
weblogic.management.rest.Application main entry point
weblogic.management.rest.bean.utils.load.BuiltinResourceInitializer : all the MBeans are loaded here
weblogic.management.runtime.ServerRuntimeMBean
weblogic.management.rest.wls.resources.server.ShutdownServerResource this is the REST endpoint
@POST
@Produces({"application/json"})
public Response shutdownServer(@QueryParam("__detached") @DefaultValue("false") boolean detached, @QueryParam("force") @DefaultValue("false") boolean force, @PathParam("server") String name) throws Exception {
return this.getJobResponse(name, ServerOperationUtils.shutdown(this.getRequest(), name, detached, force), new ShutdownJobMessages(this));
}
weblogic.management.rest.wls.utils.ServerOperationUtils
From MBean:
com.bea.console.actions.core.server.lifecycle.Lifecycle$AdminServerShutdownJob
http://localhost:7001/console/jsp/core/server/lifecycle/ConsoleShutdown.jsp
weblogic.t3.srvr.GracefulShutdownRequest
weblogic.t3.srvr.ServerGracefulShutdownTimer
via JMX:
weblogic.management.mbeanservers.runtime.RuntimeServiceMBean extends Service : String OBJECT_NAME = "com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean"
public interface ServerRuntimeMBean extends RuntimeMBean, HealthFeedback, ServerStates, ServerRuntimeSecurityAccess
void shutdown(int var1, boolean var2, boolean var3) throws ServerLifecycleException;
https://docs.oracle.com/middleware/1221/wls/WLAPI/weblogic/management/runtime/ServerRuntimeMBean.html#shutdown_int__boolean__boolean_
For a list of REST Examples see also https://docs.oracle.com/middleware/12212/wls/WLRUR/WLRUR.pdf
----------------------------------------------------------------------
Asynchronously force shutdown a server
----------------------------------------------------------------------
curl -v \
--user operator:operator123 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{}" \
-H "Prefer:respond-async" \
-X POST http://localhost:7001/management/weblogic/latest/domainRuntime/serverLifeCycleRuntimes/Cluster1Server2/forceShutdown
HTTP/1.1 202 Accepted
Location: http://localhost:7001/management/weblogic/latest/domainRuntime/serverLifeCycleRuntimes/Cluster1Server2/tasks/_3_forceShutdown
Labels:
weblogic
Wednesday, September 18, 2019
Container PID 1
PRICELESS article on PID 1, SIGTERM and kill in containers:
https://blog.no42.org/code/docker-java-signals-pid1/
the trick is using "exec java bla" so that java becomes PID 1.
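i.e. something like this in the image's entrypoint script (a sketch; the jar path is a placeholder):

#!/bin/sh
# exec replaces the shell with the JVM, so java runs as PID 1
# and receives SIGTERM directly on "docker stop"
exec java -jar /app/app.jar "$@"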
$ kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGEMT 8) SIGFPE 9) SIGKILL 10) SIGBUS
11) SIGSEGV 12) SIGSYS 13) SIGPIPE 14) SIGALRM 15) SIGTERM
16) SIGURG 17) SIGSTOP 18) SIGTSTP 19) SIGCONT 20) SIGCHLD
21) SIGTTIN 22) SIGTTOU 23) SIGIO 24) SIGXCPU 25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGPWR 30) SIGUSR1
31) SIGUSR2 32) SIGRTMIN 33) SIGRTMIN+1 34) SIGRTMIN+2 35) SIGRTMIN+3
36) SIGRTMIN+4 37) SIGRTMIN+5 38) SIGRTMIN+6 39) SIGRTMIN+7 40) SIGRTMIN+8
41) SIGRTMIN+9 42) SIGRTMIN+10 43) SIGRTMIN+11 44) SIGRTMIN+12 45) SIGRTMIN+13
46) SIGRTMIN+14 47) SIGRTMIN+15 48) SIGRTMIN+16 49) SIGRTMAX-15 50) SIGRTMAX-14
51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9
56) SIGRTMAX-8 57) SIGRTMAX-7 58) SIGRTMAX-6 59) SIGRTMAX-5 60) SIGRTMAX-4
61) SIGRTMAX-3 62) SIGRTMAX-2 63) SIGRTMAX-1 64) SIGRTMAX
"The SIGTERM signal is a generic signal used to cause program termination. Unlike SIGKILL, this signal can be blocked, handled, and ignored. It is the normal way to politely ask a program to terminate."
See also https://docs.docker.com/v17.12/engine/reference/run/#specify-an-init-process
"You can use the --init flag to indicate that an init process should be used as the PID 1 in the container. Specifying an init process ensures the usual responsibilities of an init system, such as reaping zombie processes, are performed inside the created container."
https://github.com/krallin/tini "All Tini does is spawn a single child (Tini is meant to be run in a container), and wait for it to exit all the while reaping zombies and performing signal forwarding." "Tini is included in Docker itself"
"A process running as PID 1 inside a container is treated specially by Linux: it ignores any signal with the default action. So, the process will not terminate on SIGINT or SIGTERM unless it is coded to do so."
Tuesday, September 17, 2019
REST interface to manage WLS
this works like magic:
curl -s -v --user weblogic:weblogic0 -H X-Requested-By:MyClient -H Accept:application/json -H Content-Type:application/json -d "{timeout: 10, ignoreSessions: true }" -X POST http://localhost:7001/management/wls/latest/servers/id/AdminServer/shutdown
The problem comes when you have only HTTPS, and even worse with 2-way SSL. Then you are screwed - pardon my French - because curl stupidly accepts only PEM certificates, so if you have a p12 you must convert it into 2 separate files, certificate and private key:
openssl pkcs12 -in mycert.p12 -out file.key.pem -nocerts -nodes
openssl pkcs12 -in mycert.p12 -out file.crt.pem -clcerts -nokeys
curl -E ./file.crt.pem --key ./file.key.pem https://myservice.com/service?wsdl
CORRECTION: it seems that curl does now support p12 certs: curl --cert-type P12 ... https://curl.haxx.se/docs/manpage.html BUT only if curl is built against Apple's "Secure Transport" library, not the NSS or OpenSSL backends (run "curl -V" to find out which one you have)
See more here https://docs.oracle.com/middleware/1221/wls/WLRUR/using.htm#WLRUR180
return all servers:
curl -s --user weblogic:weblogic0 http://localhost:7001/management/weblogic/latest/edit/servers
Labels:
weblogic
Saturday, September 14, 2019
Sockets leak, a case study
We get a "too many files open", and lsof reveals some 40k IPv6 connections.
What happens if you forget to close a Socket ?
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

public class ConnLeak {
    public static void main(String[] args) throws InterruptedException, IOException {
        for (;;) {
            Thread.sleep(1000);
            leak();
        }
    }

    static void leak() throws IOException {
        System.out.println("connecting ");
        String hostName = "localhost";
        int portNumber = 8080;
        Socket echoSocket = new Socket(hostName, portNumber);
        BufferedReader in = new BufferedReader(new InputStreamReader(echoSocket.getInputStream()));
        System.out.println(in.readLine());
        // in.close();         // WE FORGOT TO CLOSE!
        // echoSocket.close(); // WE FORGOT TO CLOSE!
        System.out.println("done");
    }
}
centos@localhost ~]$ netstat -an | grep WAIT
tcp6       0      0 127.0.0.1:8080          127.0.0.1:50008         FIN_WAIT2
tcp6       0      0 127.0.0.1:8080          127.0.0.1:50020         FIN_WAIT2
tcp6       0      0 127.0.0.1:8080          127.0.0.1:50016         FIN_WAIT2
tcp6       0      0 127.0.0.1:50014         127.0.0.1:8080          CLOSE_WAIT
tcp6       0      0 127.0.0.1:8080          127.0.0.1:50006         FIN_WAIT2
tcp6       0      0 127.0.0.1:8080          127.0.0.1:50012         FIN_WAIT2
tcp6       0      0 127.0.0.1:50020         127.0.0.1:8080          CLOSE_WAIT
tcp6     578      0 ::1:8080                ::1:41240               CLOSE_WAIT
tcp6       0      0 127.0.0.1:8080          127.0.0.1:50014         FIN_WAIT2
tcp6       0      0 127.0.0.1:8080          127.0.0.1:50018         FIN_WAIT2
tcp6       0      0 127.0.0.1:50022         127.0.0.1:8080          CLOSE_WAIT
tcp6       0      0 127.0.0.1:50016         127.0.0.1:8080          CLOSE_WAIT
tcp6       0      0 127.0.0.1:50006         127.0.0.1:8080          CLOSE_WAIT
tcp6       0      0 127.0.0.1:8080          127.0.0.1:50022         FIN_WAIT2
tcp6       0      0 127.0.0.1:8080          127.0.0.1:50010         FIN_WAIT2
tcp6       0      0 127.0.0.1:50018         127.0.0.1:8080          CLOSE_WAIT
tcp6       0      0 127.0.0.1:50010         127.0.0.1:8080          CLOSE_WAIT
tcp6       0      0 127.0.0.1:50008         127.0.0.1:8080          CLOSE_WAIT
tcp6       0      0 127.0.0.1:50012         127.0.0.1:8080          CLOSE_WAIT
you end up with a pile of socket file descriptors in FIN_WAIT2+CLOSE_WAIT
If instead you DO close the socket, you have only fd in TIME_WAIT
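The straightforward fix is try-with-resources, which closes both resources in every case (a sketch of the corrected leak() body):

try (Socket echoSocket = new Socket(hostName, portNumber);
     BufferedReader in = new BufferedReader(new InputStreamReader(echoSocket.getInputStream()))) {
    System.out.println(in.readLine());
} // socket and reader are closed here, even on exception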
In the LEAKING case, you can observe the leaking fd:
sudo lsof -p 10779
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
      Output information may be incomplete.
COMMAND   PID   USER  FD   TYPE             DEVICE  SIZE/OFF     NODE NAME
java    10779 centos cwd    DIR              253,0      4096 52165832 /home/centos
java    10779 centos rtd    DIR              253,0       243       64 /
java    10779 centos txt    REG              253,0      7734 33728521 /home/centos/jdk1.8.0_141/bin/java
java    10779 centos mem    REG              253,0 106070960    56227 /usr/lib/locale/locale-archive
java    10779 centos mem    REG              253,0    115814 52241403 /home/centos/jdk1.8.0_141/jre/lib/amd64/libnet.so
java    10779 centos mem    REG              253,0  66216625  3748173 /home/centos/jdk1.8.0_141/jre/lib/rt.jar
java    10779 centos mem    REG              253,0    124327 52241426 /home/centos/jdk1.8.0_141/jre/lib/amd64/libzip.so
java    10779 centos mem    REG              253,0     62184    56246 /usr/lib64/libnss_files-2.17.so
java    10779 centos mem    REG              253,0    225914 52241483 /home/centos/jdk1.8.0_141/jre/lib/amd64/libjava.so
java    10779 centos mem    REG              253,0     66472 52241404 /home/centos/jdk1.8.0_141/jre/lib/amd64/libverify.so
java    10779 centos mem    REG              253,0     44448    42091 /usr/lib64/librt-2.17.so
java    10779 centos mem    REG              253,0   1139680    56236 /usr/lib64/libm-2.17.so
java    10779 centos mem    REG              253,0  17013932 19043494 /home/centos/jdk1.8.0_141/jre/lib/amd64/server/libjvm.so
java    10779 centos mem    REG              253,0   2127336    42088 /usr/lib64/libc-2.17.so
java    10779 centos mem    REG              253,0     19776    56234 /usr/lib64/libdl-2.17.so
java    10779 centos mem    REG              253,0    102990 52241266 /home/centos/jdk1.8.0_141/lib/amd64/jli/libjli.so
java    10779 centos mem    REG              253,0    144792    56254 /usr/lib64/libpthread-2.17.so
java    10779 centos mem    REG              253,0    164264    34688 /usr/lib64/ld-2.17.so
java    10779 centos mem    REG              253,0     32768 41856127 /tmp/hsperfdata_centos/10779
java    10779 centos   0u   CHR              136,1       0t0        4 /dev/pts/1
java    10779 centos   1u   CHR              136,1       0t0        4 /dev/pts/1
java    10779 centos   2u   CHR              136,1       0t0        4 /dev/pts/1
java    10779 centos   3r   REG              253,0  66216625  3748173 /home/centos/jdk1.8.0_141/jre/lib/rt.jar
java    10779 centos   4u  unix 0xffff8801eaabfc00       0t0   327936 socket
java    10779 centos   5u  IPv6             327938       0t0      TCP localhost:50132->localhost:webcache (CLOSE_WAIT)
java    10779 centos   6u  IPv6             327940       0t0      TCP localhost:50134->localhost:webcache (CLOSE_WAIT)
java    10779 centos   7u  IPv6             327942       0t0      TCP localhost:50136->localhost:webcache (CLOSE_WAIT)
java    10779 centos   8u  IPv6             327146       0t0      TCP localhost:50138->localhost:webcache (CLOSE_WAIT)
java    10779 centos   9u  IPv6             329030       0t0      TCP localhost:50140->localhost:webcache (CLOSE_WAIT)
java    10779 centos  10u  IPv6             329036       0t0      TCP localhost:50142->localhost:webcache (CLOSE_WAIT)
java    10779 centos  11u  IPv6             326450       0t0      TCP localhost:50146->localhost:webcache (CLOSE_WAIT)
java    10779 centos  12u  IPv6             328073       0t0      TCP localhost:50148->localhost:webcache (CLOSE_WAIT)
java    10779 centos  13u  IPv6             329065       0t0      TCP localhost:50150->localhost:webcache (ESTABLISHED)
A Java Flight Recorder measurement reveals the source of the leak:
You can view those sockets in /proc/10779/fd/ folder (use ls -ltra, they are links)
and display more info with
cat /proc/10779/net/sockstat
sockets: used 881
TCP: inuse 9 orphan 0 tw 9 alloc 94 mem 75
UDP: inuse 8 mem 1
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
(see also https://www.cyberciti.biz/faq/linux-find-all-file-descriptors-used-by-a-process/ for procfs info )
Labels:
socket
Friday, September 13, 2019
Scrum
Excellent presentation:
Plan, Build, Test, Review, Deploy -> Potentially Deliverable Product
Several Incremental releases (Sprint)
Product Owner
Scrum Master
Team
Product Backlog (User Stories = feature sets) "as user I need something so that bla"
Sprint Backlog
Burndown Chart
3 Ceremonies:
- Sprint Planning (estimate user stories sizing)
- Daily Scrum (what is completed, what they work on, what is blocking them)
- Sprint backlog (things to do in the current Sprint)
Sprint Review
Sprint Retrospective
Story Format: WHO... WHAT...WHY...
Story Points, FIBONACCI sequence 1 2 3 5 8 13
Minimum Viable Product = something you can demonstrate to customer
Labels:
scrum
Monday, September 9, 2019
podman! skopeo!
sudo yum install epel-release -y
sudo yum install dnf -y
sudo dnf install -y podman
segmentation fault!
alias docker=podman
funny presentation
https://github.com/containers/conmon
Sunday, September 8, 2019
docker cheat sheets
https://github.com/wsargent/docker-cheat-sheet
About "docker inspect". you can inspect a container or an image.
docker inspect $containerid
-> there are 2 places with "Image", one showing the image's sha2, the other the image's name.
If you use the sha2 to identify the image, remember it's truncated to the leftmost 12 digits.
docker image inspect $imagename (or $imagesha2)
here you find the Cmd and Entrypoint`
You can assign a label to image in Dockerfile:
LABEL app=hello-world
and use it to filter:
docker images --filter "label=app=hello-world"
since version 18.6 you can use BuildKit https://docs.docker.com/engine/reference/builder/#label :
without BuildKit:
export DOCKER_BUILDKIT=1
with BuildKit:
CMD vs ENTRYPOINT
if you use
ENTRYPOINT echo "pippo "
docker build -t echopippo .
docker run echopippo
it will print "pippo"
if you use also CMD
ENTRYPOINT echo "pippo "
CMD " peppo"
it will still print only "pippo". To append arguments, you must use the JSON array format (i.e. square brackets)
ENTRYPOINT ["echo", "pippo "]
CMD [" peppo"]
this will print "pippo peppo".
If you provide an extra parameter from command line:
docker run echopippo pluto
then CMD is ignored, and the parameter from command line is used:
pippo pluto
Priceless Romin Irani turorial on :
docker volumes: https://rominirani.com/docker-tutorial-series-part-7-data-volumes-93073a1b5b72
Dockerfile: https://rominirani.com/docker-tutorial-series-writing-a-dockerfile-ce5746617cd
Other tutorials here https://github.com/botchagalupe/DockerDo
About "docker inspect". you can inspect a container or an image.
docker inspect $containerid
-> there are 2 places with "Image", one showing the image's sha2, the other the image's name.
If you use the sha2 to identify the image, remember it's truncated to the leftmost 12 digits.
docker image inspect $imagename (or $imagesha2)
here you find the Cmd and Entrypoint
You can assign a label to image in Dockerfile:
LABEL app=hello-world
and use it to filter:
docker images --filter "label=app=hello-world"
since version 18.06 you can use BuildKit https://docs.docker.com/engine/reference/builder/#label :
without BuildKit:
docker build -t myhw .
Sending build context to Docker daemon  9.728kB
Step 1/3 : FROM busybox:latest
 ---> db8ee88ad75f
Step 2/3 : CMD echo "Hello world date=" `date`
 ---> Using cache
 ---> 22bd2fd85b95
Step 3/3 : LABEL app=hello-world
 ---> Running in 1350a308f4eb
Removing intermediate container 1350a308f4eb
 ---> 7a576b758d86
Successfully built 7a576b758d86
Successfully tagged myhw:latest
export DOCKER_BUILDKIT=1
with BuildKit:
docker build -t myhw .
[+] Building 2.2s (5/5) FINISHED
 => [internal] load .dockerignore                                       1.5s
 => => transferring context: 2B                                         0.0s
 => [internal] load build definition from Dockerfile                    1.2s
 => => transferring dockerfile: 181B                                    0.0s
 => [internal] load metadata for docker.io/library/busybox:latest       0.0s
 => [1/1] FROM docker.io/library/busybox:latest                         0.0s
 => => resolve docker.io/library/busybox:latest                         0.0s
 => exporting to image                                                  0.0s
 => => exporting layers                                                 0.0s
 => => writing image sha256:4ce77b76b309be143c25ad75add6cdf17c282491e2966e6ee32edc40f802b1f4  0.0s
 => => naming to docker.io/library/myhw
CMD vs ENTRYPOINT
if you use
ENTRYPOINT echo "pippo "
docker build -t echopippo .
docker run echopippo
it will print "pippo"
if you use also CMD
ENTRYPOINT echo "pippo "
CMD " peppo"
it will still print only "pippo". To append arguments, you must use the JSON array format (i.e. square brackets)
ENTRYPOINT ["echo", "pippo "]
CMD [" peppo"]
this will print "pippo peppo".
If you provide an extra parameter from command line:
docker run echopippo pluto
then CMD is ignored, and the parameter from command line is used:
pippo pluto
Priceless Romin Irani tutorials on:
docker volumes: https://rominirani.com/docker-tutorial-series-part-7-data-volumes-93073a1b5b72
Dockerfile: https://rominirani.com/docker-tutorial-series-writing-a-dockerfile-ce5746617cd
Other tutorials here https://github.com/botchagalupe/DockerDo
Labels:
cmd,
docker,
entrypoint
Thursday, September 5, 2019
running ssh workflow on group of servers
#generate sample servers
for i in {1..110}; do printf "myserver%05d\n" $i; done > myservers.txt
#group servers by 20
count=0
group=0
for line in $(cat myservers.txt); do
  printf "GROUP_%03d %s\n" $group $line
  ((count=$count + 1))
  if [ $count -eq 20 ]; then
    ((group=$group + 1))
    count=0
  fi
done >> grouped.txt
#print servers in given group
cat grouped.txt | grep GROUP_005 | awk -F' ' '{print $2}'
#put this in steps.sh
myserver=$1
step=$2
case $step in
  1)
    echo "hello, this is step 1 for server $myserver"
    ;;
  2)
    echo "ciao, this is step 2 for server $myserver"
    ;;
  3)
    echo "Gruezi, this is step 3 for server $myserver"
    ;;
  *)
    echo "ERROR invalid step $step"
esac
#execute given step for GROUP
THESTEP=3; cat grouped.txt | grep GROUP_005 | awk -vstep="$THESTEP" -F' ' '{print $2,step}' | xargs -n 2 ./steps.sh
(note the -n 2: it makes xargs invoke steps.sh once per "server step" pair, instead of passing all pairs to a single invocation)
Sunday, September 1, 2019
saltstack getting started
curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh -M #install master and minion on same node
/etc/salt is the configuration directory; in /etc/salt/minion set:
master: localhost
id: myminion
sudo systemctl restart salt-minion
sudo systemctl restart salt-master
sudo salt-key
sudo salt-key -a myminion (then type Y)
sudo salt '*' test.ping
sudo salt-call sys.doc test.ping
sudo salt '*' cmd.run_all 'echo HELLO'
#list matcher
sudo salt -L 'myminion' test.ping
sudo salt '*' grains.item os
sudo salt '*' grains.items
sudo salt --grain 'os:CentOS' test.ping
#custom grains
cat /etc/salt/grains
#list all execution modules
sudo salt '*' sys.list_modules
sudo salt '*' pkg.install htop
sudo salt '*' state.sls apache
Ref:
Colton Meyes - Learning Saltstack (Packt Publishing), really direct and hands-on book.
(PS: I had a quick look at "Mastering Saltstack", also by Packt; it's waaay too abstract and blablaistic.)
https://docs.saltstack.com/en/latest/
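For reference, a minimal apache.sls state of the kind applied by "state.sls apache" above (a sketch; on CentOS the package/service is httpd):

apache:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - require:
      - pkg: apache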
Labels:
saltstack
ports and pods
#start a pod with default parameters
kubectl run nginx --image=nginx --restart=Never
kubectl describe pod nginx
Node: node01/172.17.0.36
IP: 10.32.0.2
#we can reach nginx with the "IP" address
curl 10.32.0.2:80
but "curl 172.17.0.36:80" doesn't work!
kubectl describe nodes node01
InternalIP: 172.17.0.36
10.32.0.2 is the Pod's IP (=same IP for all containers running in that Pod):
kubectl exec -ti nginx bash
hostname -i
10.32.0.2
The Node IP cannot be used as such to reach the Pod/Container.
Setting spec.containers[].ports.containerPort will change neither the IP nor the port on which nginx is listening: this parameter is purely "declarative" and is only useful when exposing the Pod/Deployment with a Service.
If you want to "expose" to an IP other than the Pod's IP:
kubectl expose pod nginx --port=8089 --target-port=80
kubectl describe service nginx
Type: ClusterIP
IP: 10.99.136.123
Port: 8089/TCP
TargetPort: 80/TCP
Endpoints: 10.32.0.2:80
NB this IP 10.99.136.123 is NOT the Node's IP nor the Pod's IP. It's a service-specific IP.
curl 10.99.136.123:8089
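For reference, the equivalent Service as a manifest (a sketch; it relies on the run=nginx label that "kubectl run" puts on the pod):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx
  ports:
  - port: 8089
    targetPort: 80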
kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default nginx ClusterIP 10.99.136.123 8089/TCP 16m
Ref:
https://kubernetes.io/docs/concepts/cluster-administration/networking/
Labels:
kubernetes
Wednesday, August 28, 2019
some tutorials on Jenkins Pipelines
https://www.youtube.com/watch?v=-GsvomI4CCQ very pragmatic Simplilearn tutorial,
you can practice on https://www.katacoda.com/courses/jenkins/build-docker-images which gives you a dockerized Jenkins with console
Awesome power-jenkins tip-pack https://www.youtube.com/watch?v=6BIry0cepz4 :
- run Jenkins in Docker with the jenkins/jenkins image (see the sketch after this list)
- select plugins you want to use (use plugins.txt to predefine a list of plugins)
- use agents, with swarm plugin to register, automate the agent provisioning and make them ephemeral
- don't use Maven jobs, because they are not reproducible
- use pipelines, with Jenkinsfile (pipeline/stages/stage/steps)
- in pipelines, do all work on agents ("agent any")
- user input stages should run with "agent none", to avoid blocking executors
- limit number of stages
- don't change $env variable, use withEnv(["hello=world"]) instead
- parameters (?)
- use parallelism for end-to-end tests and performance tests, on separate nodes
- "scripted" pipeline is groovy-ish for power users, declarative is ok for regular users
- pipelines should be small, they are orchestration tools... do the heavy stuff in shell scripts, which are easier to test
- in a pipeline everything must be serializable so it can be resumed on failure (continuation-passing style)... but some classes are not serializable, like groovy.text.StreamingTemplateEngine, so you have to wrap them
- BlueOcean plugin, with Editor for declarative pipelines
- use shared libraries in pipelines to reuse code, also reuse files
- use views to show only the jobs you are interested in
- BuildMonitor plugin to view jobs
- API in JSON or XML
- to-have plugins: BuildMonitor, Job Config History (to version freestyle jobs), Job DSL, Throttle Concurrent Builds, Timestamper, Version Number plugin & Build-name-setter
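For the first tip, a minimal way to run Jenkins in Docker (ports and volume name are the usual defaults, adjust as needed):
#8080 = web UI, 50000 = inbound agent port; the jenkins_home volume keeps the config across restarts
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts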
Labels:
jenkins
Kubernetes academy
https://kubernetes.academy/lessons/introduction-to-kubectl awesome productivity tips from John Harris
source <(kubectl completion bash)
#handy tools: kubectx, kubens, kube-ps1 (+ kubeon to enable the prompt)
#doc on a k8s object
kubectl explain pod.spec.containers.ports
#grep json
kubectl get pod -n kube-system kube-scheduler-master -ojson | jq .metadata.labels
#show custom columns
kubectl get pod -n kube-system kube-scheduler-master -o custom-columns=NAME:.metadata.name,NS:.metadata.namespace
#show labels
kubectl get pod -n kube-system --show-labels
#show column with value of given label
kubectl get pod -n kube-system -L k8s-app
#filter by label value
kubectl get pod -n kube-system -l k8s-app=kube-dns -L k8s-app
#sort by
kubectl get pod -n kube-system -l k8s-app=kube-dns --sort-by='{.status.containerStatuses[*].restartCount}'
#trace execution (very verbose)
kubectl get pod -n kube-system -l k8s-app=kube-dns --sort-by='{.status.containerStatuses[*].restartCount}' -v10
https://kubernetes.academy/lessons/introduction-to-ingress
Labels:
kubernetes
Monday, August 19, 2019
Wednesday, August 14, 2019
WebLogic, dramatic reduction of TLS sessions creation by rejectClientInitiatedRenegotiation
We investigated why the TLS sessions were constantly invalidated, removed from the cache and recreated, and discovered that it's WLS SSLConfigUtils.configureClientInitSecureRenegotiation() that initiates this:
at sun.security.ssl.SSLSessionContextImpl.remove(SSLSessionContextImpl.java:132)
at sun.security.ssl.SSLSessionImpl.invalidate(SSLSessionImpl.java:673)
at weblogic.socket.utils.SSLConfigUtils.configureClientInitSecureRenegotiation(SSLConfigUtils.java:27)
at weblogic.socket.JSSEFilterImpl.doHandshake(JSSEFilterImpl.java:135)
at weblogic.socket.JSSEFilterImpl.isMessageComplete(JSSEFilterImpl.java:354)
at weblogic.socket.SocketMuxer.readReadySocketOnce(SocketMuxer.java:976)
at weblogic.socket.SocketMuxer.readReadySocket(SocketMuxer.java:917)
at weblogic.socket.NIOSocketMuxer.process(NIOSocketMuxer.java:599)
at weblogic.socket.NIOSocketMuxer.processSockets(NIOSocketMuxer.java:563)
at weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:30)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:43)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:147)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:119)
the code responsible is:
public static void configureClientInitSecureRenegotiation(SSLEngine sslEngine, boolean clientInitSecureRenegotiation) {
    if (!IS_JDK_CLIENT_INIT_SECURE_RENEGOTIATION_PROPERTY_SET) {
        if ((sslEngine != null) && (!sslEngine.getUseClientMode())) {
            if (!clientInitSecureRenegotiation) {
                sslEngine.getSession().invalidate();
            }
            sslEngine.setEnableSessionCreation(clientInitSecureRenegotiation);
            if (isLoggable()) {
                SocketLogger.logDebug(clientInitSecureRenegotiation ? "Enabled" : "Disabled TLS client initiated secure renegotiation.");
            }
        }
    } else if (isLoggable()) {
        SocketLogger.logDebug("TLS client initiated secure renegotiation setting is configured with -Djdk.tls.rejectClientInitiatedRenegotiation");
    }
}
so the invalidate() is called only if !clientInitSecureRenegotiation , but it appears that clientInitSecureRenegotiation=isClientInitSecureRenegotiationAccepted is always FALSE
in JSSESocketFactory:
JSSEFilterImpl getJSSEFilterImpl(Socket connectedSocket, String host, int port) throws IOException {
    SSLEngine sslEngine = getSSLEngine(host, port);
    return new JSSEFilterImpl(connectedSocket, sslEngine, true);
}
in JSSEFilterImpl:
public JSSEFilterImpl(Socket sock, SSLEngine engine, boolean clientMode) throws IOException {
    // parameter 4 is isClientInitSecureRenegotiationAccepted, THIS IS ALWAYS FALSE, and clientMode is always TRUE
    this(sock, engine, clientMode, false);
}

// this constructor is ultimately invoked
public JSSEFilterImpl(Socket sock, SSLEngine engine, boolean clientMode, boolean isClientInitSecureRenegotiationAccepted) throws IOException {
so the only way to avoid session invalidation is to have IS_JDK_CLIENT_INIT_SECURE_RENEGOTIATION_PROPERTY_SET=false, that is by setting -Djdk.tls.rejectClientInitiatedRenegotiation=false (true or false doesn't seem to matter, as long as the property is set)
Thanks to Carlo for the excellent analysis.
Labels:
renegotiation,
tls,
weblogic
Sunday, August 11, 2019
Audit the content of a series of folders against a file
the audit.txt contains the list of original files:
/media/sf_shared/bashtests/dirtoaudit/
/media/sf_shared/bashtests/dirtoaudit/dir01
/media/sf_shared/bashtests/dirtoaudit/dir01/file01_01.txt
/media/sf_shared/bashtests/dirtoaudit/dir01/file02_01.txt
/media/sf_shared/bashtests/dirtoaudit/dir02
/media/sf_shared/bashtests/dirtoaudit/dir02/file01_02.txt
/media/sf_shared/bashtests/dirtoaudit/dir02/file02_02.txt
this script checks that in the folders
/media/sf_shared/bashtests/dirtoaudit/
/media/sf_shared/bashtests/dirtoaudit/dir01
/media/sf_shared/bashtests/dirtoaudit/dir02
there are no extra files or folders:
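A minimal sketch of such an audit, assuming audit.txt holds the expected paths one per line and flagging anything on disk that is not listed:
#!/bin/bash
#compare what is actually on disk against the expected list in audit.txt
AUDIT_FILE=audit.txt
BASE_DIR=/media/sf_shared/bashtests/dirtoaudit/
find "$BASE_DIR" | sort | while read -r entry; do
  #-x matches the whole line, -F takes the path literally, -q keeps grep silent
  if ! grep -qxF "$entry" "$AUDIT_FILE"; then
    echo "EXTRA: $entry"
  fi
done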
Of course this scales very poorly... I would never dream of writing complex logic in bash, unless I was absolutely forced
Labels:
bash
Saturday, August 10, 2019
OpenShift CI/CD
https://www.youtube.com/watch?v=65BnTLcDAJI good video on CI/CD, part 1
https://www.youtube.com/watch?v=wSFyg6Etwx8 part 2
https://www.youtube.com/watch?v=kbbK0VEy2qM OpenShift 4 CI/CD
essential is to have installed in Jenkins the "OpenShift Jenkins Pipeline (DSL) Plugin" https://github.com/openshift/jenkins-client-plugin
https://www.youtube.com/watch?v=pMDiiW1UqLo Openshift Pipelines with Tekton https://cloud.google.com/tekton/ and here is the code https://github.com/openshift/pipelines-tutorial
rpm useful commands
list files installed by an INSTALLED rpm (for an UNINSTALLED rpm, add -p and provide full path to .rpm file):
rpm -ql nginx.x86_64
or also (if the rpm is not installed yet) repoquery --list nginx.x86_64
verify that rpm-installed files have not been tampered with
rpm -V nginx.x86_64
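When a file fails a check, rpm -V prints one line per file with one flag per failed test; a hypothetical example for a locally edited config file:
S.5....T.  c /etc/nginx/nginx.conf
(S = size differs, 5 = digest differs, T = mtime differs; "c" marks a config file)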
display the postinstall and postuninstall scripts
rpm -q --scripts nginx.x86_64
which rpm provides a given file:
rpm -q --whatprovides /usr/sbin/nginx
or also
rpm -qf /usr/sbin/nginx
for a REALLY verbose verification output:
rpm -Vvv nginx.x86_64
Ref:
http://ftp.rpm.org/max-rpm/s1-rpm-verify-what-to-verify.html
https://www.cyberciti.biz/howto/question/linux/linux-rpm-cheat-sheet.php fantastic all-in-one rpm cheat sheet
Labels:
rpm
SAML and JWT
Excellent side-by-side comparison https://medium.com/@robert.broeckelmann/saml2-vs-jwt-a-comparison-254bafd98e6
Useful terminology:
https://en.wikipedia.org/wiki/Trusted_computing_base
Bearer Tokens
Holder of Key
Sender Vouches
Proof of Possession
IdP https://en.wikipedia.org/wiki/Identity_provider
Openshift RedHat plugin for Intellij
https://plugins.jetbrains.com/plugin/12030-openshift-connector-by-red-hat
Sample video on how to use it https://www.youtube.com/watch?v=kCESA7a5i3M
I keep getting the message "odo not found, do you want to download it?" , I click "yes" and nothing visible happens.... even if I have odo.exe on the PATH, I still get the error message....
https://github.com/openshift/odo
It doesn't seem very popular though.... very few downloads.... but I don't want to use Eclipse with its JBoss Openshift Client, I hate Eclipse...
However, Intellij has its own Cloud support for Openshift https://www.jetbrains.com/help/idea/working-with-clouds.html
CTRL-ALT-S, Cloud, Openshift
see also https://www.jetbrains.com/help/idea/run-debug-configuration-openshift-deployment.html
Openshift 4, interesting readings
https://computingforgeeks.com/red-hat-openshift-4-new-features/
https://cloudowski.com/articles/10-differences-between-openshift-and-kubernetes/
https://cloudowski.com/articles/honest-review-of-openshift-4/
https://cloudowski.com/articles/why-managing-container-images-on-openshift-is-better-than-on-kubernetes/
https://computingforgeeks.com/setup-openshift-origin-local-cluster-on-centos/ ( not working for me.... ) see also https://github.com/openshift/origin/blob/v4.0.0-alpha.0/docs/cluster_up_down.md
I have deployed https://github.com/vernetto/sbhello with OpenShift Online,
using the Catalog option "Red Hat OpenJDK 8".
.\oc.exe new-app openshift/java:8~https://github.com/vernetto/sbhello.git --name=sbhwpv3
.\oc.exe expose service sbhwpv3
https://github.com/fabric8io-images/run-java-sh
This is still a very good developer introduction https://www.youtube.com/watch?v=cY7KIEajqx4 (a bit outdated) by Grant Shipley, really intense and focused.
https://www.youtube.com/watch?v=-xJIvBpvEeE amazing on Openshift infrastructure management
https://coreos.com/ignition/docs/latest/ what is ignition
https://www.terraform.io/intro/index.html what is terraform
Labels:
openshift
Thursday, August 1, 2019
Linux. find broadcast address of a given network interface
It's grotesque how in 2019 we still have to rely on primitive, ambiguous tools like grep and awk to extract information from a linux command
This is what I could come up with to "find the broadcast address of a given network interface":
ip a s dev docker0 | grep "inet.*brd" | awk '{print $4}'
To subtract 1 from an IP (see here):
cat checkip.ksh
echo "Enter ip:"
read IP_val
awk -F"/" -vvalip="$IP_val" '{if($NF==valip){split($1, A,".");A[4]-=1;VAL=A[1] OFS A[2] OFS A[3] OFS A[4]}} END{print VAL}' OFS="." ip_list
It's a mad world.
The broadcast address is always (?) the highest IP in the subnet range:
Network:   172.25.1.64/26
Broadcast: 172.25.1.127
HostMin:   172.25.1.65
HostMax:   172.25.1.126
Hosts/Net: 62
and the gateway will be (broadcast-1) = 172.25.1.126
To find out what the default gateway is:
cat /etc/sysconfig/network
initialization scripts in /etc/sysconfig/network-scripts/ifcfg-*
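A more direct alternative is the iproute2 query below (the output line is a hypothetical example):
ip route show default
#prints e.g.: default via 172.25.1.126 dev eth0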
https://en.wikipedia.org/wiki/Broadcast_address
Labels:
ip
Saturday, July 27, 2019
Serialization Filtering in Java10
in WebLogic:
Caused by: java.io.InvalidClassException: filter status: REJECTED
at java.io.ObjectInputStream.filterCheck(ObjectInputStream.java:1258)
at java.io.ObjectInputStream.readHandle(ObjectInputStream.java:1705)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1556)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2288)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2212)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2070)
Caused by: java.lang.StackOverflowError
at java.io.ObjectStreamClass$FieldReflector.getPrimFieldValues(ObjectStreamClass.java:2153)
at java.io.ObjectStreamClass.getPrimFieldValues(ObjectStreamClass.java:1390)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1532)
at java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:440)
at com.sun.org.apache.xerces.internal.dom.NodeImpl.writeObject(NodeImpl.java:2019)
at sun.reflect.GeneratedMethodAccessor2392.invoke(Unknown Source)
Solution:
-Dweblogic.oif.serialFilterMode=disable
Also consider the other info here :
https://docs.oracle.com/javase/10/core/serialization-filtering1.htm#JSCOR-GUID-91735293-E38E-4A81-85DC-719AFEB36026
and the value of jdk.serialFilter Security property ($JAVA_HOME/lib/security/java.security )
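Rather than disabling the filter entirely, a JEP 290 pattern could whitelist only the needed packages; a hypothetical sketch (package and jar names are placeholders):
java -Djdk.serialFilter='maxdepth=100;com.mycompany.**;!*' -jar myapp.jar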
Monday, July 22, 2019
No suitable client certificate could be found - continuing without client authentication
"No suitable client certificate could be found - continuing without client authentication"
1) are you specifying the password for the keystore?
2) are you providing a full certificate chain ? ( chain [0] , chain [1], chain [2] until the Root CA)
3) server specified issuers different from the one of the client certificate?
4) server specified ciphers not matching the one of the certificate?
5) (this is same as 2) whole certificate chain not in keystore (see https://stackoverflow.com/questions/9299133/why-doesnt-java-send-the-client-certificate-during-ssl-handshake )
Here is the code https://github.com/frohoff/jdk8u-jdk/blob/master/src/share/classes/sun/security/ssl/ClientHandshaker.java ; as you can see it's totally pathetic that completely different scenarios all produce the same message.
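Before reading the JDK code, it can help to double-check what the keystore actually contains (keystore name and type here are placeholders):
keytool -list -v -keystore clientkeystore.p12 -storetype PKCS12
#check that "Certificate chain length:" is greater than 1, and inspect each Issuer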
Monday, July 15, 2019
Sunday, July 14, 2019
Saturday, July 13, 2019
Book: Spring Microservices in Action
This is a brilliantly written book.
https://github.com/carnellj/spmia-chapter1
Microservice Architecture
@SpringBootApplication
@RestController
@RequestMapping
@PathVariable
Flexible, Resilient, Scalable
IaaS, PaaS, SaaS, FaaS, CaaS
Client-side load balancing, Circuit breaker, Fallback, Bulkhead
Log correlation. Log aggregation. Microservice tracing
Spring Cloud:
Netflix Eureka (discovery), Zuul (routing), Ribbon (LB), Hystrix (Circuit Breaker), Sleuth/Zipkin (logging, tracing, aggregation), OAuth2/JWT
https://cloud.spring.io/spring-cloud-security/ Spring Security
https://jwt.io JSON Web Token
https://travis-ci.org
Hystrix proxies all RestTemplate calls to add timeouts. Ribbon also injects RestTemplate with all available service instances for LB and FO
to expose a bean there are 2 ways:
either one of @Component, @Service, @Repository
or @Configuration + @Bean
Apache Thrift, Apache Avro
12 factor apps: codebase in git, dependencies in maven, config in separate files, backing services (DB etc) cloud-ready,
immutable builds, stateless processes, port binding, horizontal scaling, disposable services, dev=prod, streamable logs (splunk, fluentd), scripted admin tasks.
Actuator health check.
Labels:
12factor,
books,
microservices,
Spring
Friday, July 12, 2019
dockerfile cmd and entrypoint
very confusing, poor design IMHO
Dockerfile:
FROM ubuntu
ENV myname "pierre"
ENTRYPOINT ["/bin/bash", "-c", "echo hello ${myname}"]
docker build -t hello01 .
docker run hello01
FROM ubuntu
ENTRYPOINT ["sleep"]
docker build -t hello02 .
#this sleeps for 5s
docker run hello02 5
#this gives an error because the parameter is missing
docker run hello02
FROM ubuntu
ENTRYPOINT ["sleep"]
CMD ["5"]
this version uses a default time of 5, if not specified in command line
"docker run hello02" will sleep for 5
"docker run hello02 10" will sleep for 10
Labels:
docker,
dockerfile
Thursday, July 11, 2019
Java JSSE SSL flags
-Djavax.net.debug=help
all turn on all debugging
ssl turn on ssl debugging
The following can be used with ssl:
record enable per-record tracing
handshake print each handshake message
keygen print key generation data
session print session activity
defaultctx print default SSL initialization
sslctx print SSLContext tracing
sessioncache print session cache tracing
keymanager print key manager tracing
trustmanager print trust manager tracing
pluggability print pluggability tracing
handshake debugging can be widened with:
data hex dump of each handshake message
verbose verbose handshake message printing
record debugging can be widened with:
plaintext hex dump of record plaintext
packet print raw SSL/TLS packets
Other not-so-famous properties:
https://www.oracle.com/technetwork/java/javase/overview/tlsreadme2-176330.html
-Dsun.security.ssl.allowUnsafeRenegotiation=true
-Dsun.security.ssl.allowLegacyHelloMessages=true
https://www.oracle.com/technetwork/java/javase/8u161-relnotes-4021379.html
-Djdk.tls.allowLegacyResumption=true
-Djdk.tls.allowLegacyMasterSecret=true
-Djdk.tls.traceHandshakeException=true
-Djdk.tls.useExtendedMasterSecret=true
-Djdk.tls.legacyAlgorithms=???
-Djdk.tls.ephemeralDHKeySize=???
https://docs.oracle.com/javase/10/security/java-secure-socket-extension-jsse-reference-guide.htm
jdk.tls.client.cipherSuites
jdk.tls.server.cipherSuites
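A typical way to combine the debug scopes above (the jar name is a placeholder):
java -Djavax.net.debug=ssl:handshake:verbose -jar myapp.jar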
Wednesday, July 10, 2019
kubectl generators and restart option
https://kubernetes.io/docs/reference/kubectl/conventions/#generators
the only non-deprecated generator is "run-pod/v1":
kubectl run nginx --image=nginx --generator=run-pod/v1 --dry-run -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
kubectl run nginx --image=nginx --dry-run -o yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
#create nginx POD only
kubectl run nginx --image=nginx --port=80 --restart=Never --dry-run -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
#create deployment nginx and pod
kubectl run nginx --image=nginx --port=80 --restart=Always --dry-run -o yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}
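Note that in recent kubectl versions these generators were removed altogether; assuming a recent client, the equivalents are:
#create a Deployment
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml
#create a bare Pod (what kubectl run now always does)
kubectl run nginx --image=nginx --dry-run=client -o yaml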
Labels:
kubectl
Friday, June 28, 2019
Helidon MicroProfiles
https://helidon.io/docs/latest/#/microprofile/01_introduction
Quickstart Helidon SE https://helidon.io/docs/latest/#/guides/02_quickstart-se
Quickstart Helidon MP https://helidon.io/docs/latest/#/guides/03_quickstart-mp
"MicroProfile is a collection of enterprise Java APIs that should feel familiar to Java EE developers. MicroProfile includes existing APIs such as JAX-RS, JSON-P and CDI, and adds additional APIs in areas such as configuration, metrics, fault tolerance and more."
More on MP https://helidon.io/docs/latest/#/microprofile/01_introduction
Labels:
helidon
Saturday, June 22, 2019
maven-install-plugin copies files to your local .m2 repo
http://maven.apache.org/plugins/maven-install-plugin/usage.html
you can run this command from anywhere, no need for a pom.xml:
$ mvn install:install-file -Dfile=/c/pierre/downloads/apache-maven-3.5.3-bin.zip -DgroupId=pippo -DartifactId=pluto -Dpackaging=zip -Dversion=3.0
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------< org.apache.maven:standalone-pom >-------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] --- maven-install-plugin:2.4:install-file (default-cli) @ standalone-pom ---
[INFO] Installing C:\pierre\downloads\apache-maven-3.5.3-bin.zip to c:\pierre\.m2\repository\pippo\pluto\3.0\pluto-3.0.zip
[INFO] Installing C:\Users\pierl\AppData\Local\Temp\mvninstall5440042488979291271.pom to c:\pierre\.m2\repository\pippo\pluto\3.0\pluto-3.0.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.689 s
[INFO] Finished at: 2019-06-22T16:08:57+02:00
[INFO] ------------------------------------------------------------------------
and the generated pom.xml is
<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <groupId>pippo</groupId>
  <artifactId>pluto</artifactId>
  <version>3.0</version>
  <packaging>zip</packaging>
  <description>POM was created from install:install-file</description>
</project>
Labels:
maven
Wednesday, June 19, 2019
Spring Boot 2 HTTPS
see also https://www.baeldung.com/spring-boot-https-self-signed-certificate
https://better-coding.com/enabling-https-in-spring-boot-application/
generate the self-signed certificate:
keytool -genkeypair -alias baeldung -keyalg RSA -keysize 2048 -storetype PKCS12 -keystore baeldung.p12 -validity 3650
and store it in src/main/resources/keystore folder
in applications.properties:
server.port=8443
management.endpoints.web.exposure.include=*
management.endpoint.shutdown.enabled=true
# The format used for the keystore. It could be set to JKS in case it is a JKS file
server.ssl.key-store-type=PKCS12
# The path to the keystore containing the certificate
#server.ssl.key-store=classpath:keystore/baeldung.p12
server.ssl.key-store=src/main/resources/keystore/baeldung.p12
# The password used to generate the certificate
server.ssl.key-store-password=password
# The alias mapped to the certificate
server.ssl.key-alias=baeldung
server.ssl.key-password=password
#trust store location
trust.store=classpath:keystore/baeldung.p12
#trust store password
trust.store.password=password
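To verify that HTTPS is up (the -k flag skips certificate validation, needed here because the certificate is self-signed):
curl -k https://localhost:8443/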
Labels:
https,
springboot
maven common plugins
For a very good overall tutorial on Maven, read this https://www.baeldung.com/maven
For a list of most plugins https://maven.apache.org/plugins/
https://www.baeldung.com/executable-jar-with-maven
maven-dependency-plugin
maven-jar-plugin
maven-assembly-plugin
maven-shade-plugin
com.jolira.onejar-maven-plugin
spring-boot-maven-plugin
tomcat7-maven-plugin
https://www.baeldung.com/maven-profiles
maven-help-plugin
https://www.baeldung.com/maven-dependency-latest-version
versions-maven-plugin
https://www.baeldung.com/maven-enforcer-plugin
maven-enforcer-plugin
https://www.mojohaus.org/build-helper-maven-plugin/usage.html
build-helper-maven-plugin
https://www.baeldung.com/maven-release-nexus
https://www.baeldung.com/install-local-jar-with-maven/
maven-install-plugin
maven-deploy-plugin
maven-release-plugin
nexus-staging-maven-plugin
https://www.baeldung.com/integration-testing-with-the-maven-cargo-plugin
maven-surefire-plugin
Labels:
maven
Monday, June 17, 2019
Spring certification
https://d1fto35gcfffzn.cloudfront.net/academy/Spring-Professional-Certification-Study-Guide.pdf
buy exam here https://academy.pivotal.io/confirm-course?courseid=EDU-1120-APP
Udemy courses
https://www.udemy.com/spring-framework-4-course-and-core-spring-certification/
https://www.udemy.com/course/spring-framework-5-beginner-to-guru/
Mock exam https://www.certification-questions.com/spring-dumps/professional.html
Labels:
Spring
Sunday, June 16, 2019
Spring bean lifecycles and BeanPostProcessor
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
@Component
public class MyComponent implements InitializingBean, DisposableBean {
@Override
public void afterPropertiesSet() throws Exception {
System.out.println("afterPropertiesSet from InitializingBean");
}
@PostConstruct
public void onPostConstruct() {
System.out.println("onPostConstruct");
}
@PreDestroy
public void onPreDestroy() {
System.out.println("onPreDestroy");
}
@Override
public void destroy() throws Exception {
System.out.println("destroy from DisposableBean ");
}
}
the sequence is:
onPostConstruct
afterPropertiesSet from InitializingBean
onPreDestroy
destroy from DisposableBean
and you can intercept the instantiation of every bean with a BeanPostProcessor:
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CustomBeanPostProcessor implements BeanPostProcessor {
    public CustomBeanPostProcessor() {
        System.out.println("0. Spring calls constructor");
    }

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        System.out.println(bean.getClass() + " " + beanName);
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        System.out.println(bean.getClass() + " " + beanName);
        return bean;
    }
}
Labels:
Spring
fstab and UUID for device identification, docker and friends
https://help.ubuntu.com/community/Fstab
on my VirtualBox Centos7:
cat /etc/fstab
/dev/mapper/cl-root / xfs defaults 0 0
UUID=70139d85-209e-4997-9d06-af6659221021 /boot xfs defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
this is:
[Device] [Mount Point] [File System Type] [Options] [Dump] [Pass]
ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx. 1 root root 9 Jun 14 17:41 2019-05-13-13-58-35-65 -> ../../sr0
lrwxrwxrwx. 1 root root 10 Jun 14 17:41 27882150-dbcf-44a5-8461-a7e16020ee6f -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Jun 14 17:41 70139d85-209e-4997-9d06-af6659221021 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Jun 14 17:41 96e9a0f9-2b77-4cfc-be6e-f4c982e57123 -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Jun 15 19:08 fdad3ac1-1c70-4371-8f9e-72ab7f0167df -> ../../dm-3
blkid
/dev/sr0: UUID="2019-05-13-13-58-35-65" LABEL="VBox_GAs_6.0.8" TYPE="iso9660"
on the host VM:
mount | sort
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
configfs on /sys/kernel/config type configfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/mapper/cl-root on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/docker-253:0-34242903-3869b9e3d61005155d7ce7222280b67d4c034537b462d76016409d74c39c403b on /var/lib/docker/devicemapper/mnt/3869b9e3d61005155d7ce7222280b67d4c034537b462d76016409d74c39c403b type xfs (rw,relatime,seclabel,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
/dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/sr0 on /run/media/centos/VBox_GAs_6.0.8 type iso9660 (ro,nosuid,nodev,relatime,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=3989408k,nr_inodes=997352,mode=755)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
proc on /run/docker/netns/9c46943f17e7 type proc (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
shm on /var/lib/docker/containers/55284026cd2880cf08c45e66754fcf8011c9cf3227f1564022afad7807cbee27/mounts/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel,size=65536k)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13854)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=801028k,mode=700,uid=1000,gid=1000)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
on the docker centos7 container:
mount | sort
/dev/mapper/cl-root on /etc/hostname type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/cl-root on /etc/hosts type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/cl-root on /etc/resolv.conf type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/docker-253:0-34242903-3869b9e3d61005155d7ce7222280b67d4c034537b462d76016409d74c39c403b on / type xfs (rw,relatime,seclabel,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota)
cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (ro,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/systemd type cgroup (ro,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
devpts on /dev/console type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=666)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=666)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/bus type proc (ro,relatime)
proc on /proc/fs type proc (ro,relatime)
proc on /proc/irq type proc (ro,relatime)
proc on /proc/sys type proc (ro,relatime)
proc on /proc/sysrq-trigger type proc (ro,relatime)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel,size=65536k)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime,seclabel)
tmpfs on /dev type tmpfs (rw,nosuid,seclabel,size=65536k,mode=755)
tmpfs on /proc/acpi type tmpfs (ro,relatime,seclabel)
tmpfs on /proc/asound type tmpfs (ro,relatime,seclabel)
tmpfs on /proc/kcore type tmpfs (rw,nosuid,seclabel,size=65536k,mode=755)
tmpfs on /proc/keys type tmpfs (rw,nosuid,seclabel,size=65536k,mode=755)
tmpfs on /proc/sched_debug type tmpfs (rw,nosuid,seclabel,size=65536k,mode=755)
tmpfs on /proc/scsi type tmpfs (ro,relatime,seclabel)
tmpfs on /proc/timer_list type tmpfs (rw,nosuid,seclabel,size=65536k,mode=755)
tmpfs on /proc/timer_stats type tmpfs (rw,nosuid,seclabel,size=65536k,mode=755)
tmpfs on /sys/firmware type tmpfs (ro,relatime,seclabel)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,relatime,seclabel,mode=755)
One can notice a lot of differences between the VM and the container mounts: notably, all the cgroup mounts in docker are ro while in the VM they are rw, and docker bind-mounts /dev/mapper/cl-root onto /etc/hostname, /etc/hosts and /etc/resolv.conf inside the container.
What is tmpfs? https://en.wikipedia.org/wiki/Tmpfs
What is xfs? https://en.wikipedia.org/wiki/XFS
What is FUSE (fusectl) ? https://en.wikipedia.org/wiki/Filesystem_in_Userspace#Examples
Friday, June 14, 2019
bash comparison and validation of string
trying to understand Bash syntax is really wasted time.... just copy/paste working examples
array=("pippo pluto topolino")
value=pluto
[[ " ${array[@]} " =~ " ${value} " ]] && echo "YES" || echo "NO"
if [[ " ${array[@]} " =~ " ${value} " ]]; then echo trovato; fi
pippo="ciao"
[[ $pippo = "ciao" ]] && echo "1yes"
[[ "ciao" = "ciao" ]] && echo "2yes"
x="valid"
if [ "$x" = "valid" ]; then
  echo "x has the value 'valid'"
fi
[[ "$x" = "valid" ]] && echo "x is valid"
[ "$x" == "valid" ] && echo "x has the value 'valid'"
[ "$x" == "valid" ] && echo "i am valid" || echo "i am invalid"
Labels:
bash
Tuesday, June 11, 2019
Java SSL server and client
https://www.baeldung.com/java-ssl-handshake-failures
this article is inspiring but it contains several errors/omissions.
The actually working code with detailed keytool commands is here https://github.com/vernetto/ssltests
Ultimate resource to learn SSL handshake is https://tls.ulfheim.net/
Sunday, June 9, 2019
shell testing
I have never seen in my life a bash shell script being covered by automated tests.
I have thought of using Java with Mockito and JUnit 5, but it's not very straightforward to run shell scripts from Java (in 2019.... maybe in 2 years it will be normal).
But I think it would be an excellent idea.
This is an inspiring article https://www.leadingagile.com/2018/10/unit-testing-shell-scriptspart-one/
This is the shunit2 framework:
https://github.com/kward/shunit2/
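A minimal sketch of a shunit2 test, assuming the shunit2 script has been cloned next to the test:
#!/bin/sh
testStringEquality() {
  assertEquals "ciao" "ciao"
}
testFileIsCreated() {
  touch /tmp/pippo.txt
  assertTrue "[ -f /tmp/pippo.txt ]"
}
#load and run shunit2
. ./shunit2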
Here the reference manual for shell scripting http://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html but it's a bit too academic.
https://www.tldp.org/LDP/abs/html/index.html this one is richer of examples
PS shell scripting sucks
CRI-O
https://cri-o.io/
CRI-O = "Container Runtime Interface" "Open Container Initiative"
"a lightweight alternative to using Docker as the runtime for kubernetes"
https://www.quora.com/How-is-CRI-O-different-from-Docker-technology
"The CRI-O Container Engine is a implementation of a CRI (Kubernetes Container Runtime interface) that dedicated to Kubernetes. It implements only the features necessary to implement the CRI. Basically whatever Kubernetes needs. The goal to be as simple as possible and to never ever break Kubernetes. CRI-O is only for running containers in production. It runs OCI containers based on OCI images, which basically says it can run any container image sitting at Docker.io, Quay.IO, or any other container registry. It also launches OCI containers with runc.
Docker has a whole bunch of different technology, but I am guessing you are asking about the Docker daemon. Docker daemon is a general purpose container engine that implements API for launching OCI Container using the same runc that CRI-O uses. Docker daemon supports multiple different orchestrators including the Docker Client, Docker Swarm, Kubernetes, Mesosphere. It also supports everything from playing with containers to building containers.
The team behind CRI-O believes that building containers and developing and playing with containers should be done by different tools than the container engine that is used by Kubernetes. The CRI-O team has developed the Podman and Buildah container engines for developing/playing with containers and building container images.
Since these three tasks are done separately CRI-O can run with much tighter security than is required for building and developing containers."
CRI-O and kubeadm
https://katacoda.com/courses/kubernetes/getting-started-with-kubeadm-crio
What is a "pause" container and a "PID namespace sharing" ? https://www.ianlewis.org/en/almighty-pause-container
What is Weave ? https://www.weave.works/docs/cloud/latest/overview/
What is a Nodeport ? https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
Labels:
cri-o,
docker,
kubernetes,
oci