I have been through this very interesting presentation on the Oracle Java Cloud PaaS solution, and I am looking forward to reading the book by Harshad Oak:
I am quite persuaded that 10 years from now the mere idea of running your own on-premise data center will be considered madness... well, let's say 20 years from now... experience shows that the industry takes time to make large paradigm shifts...
Wednesday, December 24, 2014
Friday, December 5, 2014
WorkManager: “Maximum Threads Constraint” vs “Capacity Constraint”
Someone asked: "if we set a limit on the number of concurrent requests using a Work Manager, what happens to incoming requests once the limit has already been reached?"
On the Work Manager configuration, there is a "Capacity Constraint":
"The total number of requests that can be queued or executing before WebLogic Server begins rejecting requests"
What is the difference between “Maximum Threads Constraint” and “Capacity Constraint” ?
https://blogs.oracle.com/imc/entry/weblogic_server_performance_and_tuning1
Max Threads Constraint: Guarantees the number of threads the server will allocate to requests.
Capacity Constraint: Causes the server to reject requests only when it has reached its capacity.
I think it's safe to set the Capacity Constraint to a fairly high number (say 100), so as to allow the server to put clients on hold even when it has reached its Maximum Threads Constraint.
It would be nice to test this….
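For reference, this is roughly what such a pair of constraints looks like inside the self-tuning section of config.xml (a sketch only: the names, target and counts below are made up, and normally you would create these objects from the console or WLST rather than by editing the file):
<max-threads-constraint>
  <name>MaxThreads10</name>
  <target>osbpp1ms1</target>
  <count>10</count>
</max-threads-constraint>
<capacity>
  <name>Capacity100</name>
  <target>osbpp1ms1</target>
  <count>100</count>
</capacity>
<work-manager>
  <name>MyWorkManager</name>
  <target>osbpp1ms1</target>
  <max-threads-constraint>MaxThreads10</max-threads-constraint>
  <capacity>Capacity100</capacity>
</work-manager>
With this combination, at most 10 requests of that Work Manager run concurrently, up to 100 can be queued or executing at any time, and anything beyond that gets rejected.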
Labels:
workmanager
Tuesday, December 2, 2014
Another nightmare on Remote Desktop Street
Remote Desktop is a stinky piece of code; you can tell by how little it helps you troubleshoot when it fails to connect. The error message is laughable in its generality.
First make sure it's actually listening:
netstat -an | find "3389"
where 3389 is the default port. Don't try to find it in HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\TerminalServer\WinStations\RDP-Tcp\PortNumber because, for instance, on my Windows 7 Enterprise this key doesn't even exist.
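If I'm not mistaken, the key name actually contains a space ("Terminal Server"), so a quick check from a command prompt would be something like:
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber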
You can go through these troubleshooting steps
In my case, I had followed these instructions:
Log into the server and press Windows key + R, then type MMC.exe. Click on File > Add/Remove Snap-in > click on Group Policy Object > Add > Finish > OK. Double-click Local Computer Policy > Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Connections. Set "Limit number of connections" = 999999.
MMC is the Microsoft Management Console, a replacement for direct Registry manipulation. It stinks, like all Windows UIs.
I had done that, and Remote Desktop stopped working. Then I repeated exactly the same manipulation, checked whether the "Remote Desktop enabled" flag was set, and it was not! Enabling it again fixed the issue. This MMC.exe utility is confusing, to say the least... just don't play with it...
Labels:
remotedesktop
Wednesday, November 26, 2014
jinfo causing segmentation fault on a 64 bit JVM
On a machine, I have jinfo resolving to:
/usr/bin/jinfo
When I run it against a 64bit JVM, I get:
jinfo -sysprops ${pid}
Attaching to process ID 26267, please wait...
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0xaeec7791, pid=31050, tid=2934725520
#
# JRE version: 6.0_33-b03
# Java VM: Java HotSpot(TM) Server VM (20.8-b03 mixed mode linux-x86 )
# Problematic frame:
# C [libsaproc.so+0x1791] long double restrict+0x1d
#
# An error report file with more information is saved as:
# /opt/oracle/usr/hs_err_pid31050.log
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Aborted
and the file /opt/oracle/usr/hs_err_pid31050.log says:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0xaeec7791, pid=31050, tid=2934725520
#
# JRE version: 6.0_33-b03
# Java VM: Java HotSpot(TM) Server VM (20.8-b03 mixed mode linux-x86 )
# Problematic frame:
# C [libsaproc.so+0x1791] long double restrict+0x1d
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
The problem, evidently, is that THAT jinfo is a 32-bit build, and I should rather use /usr/lib/jvm/java-1.6.0-sun-1.6.0.81.x86_64/bin/jinfo
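A quick sanity check, using the paths from this post, to confirm which binary is actually being picked up:
file $(readlink -f /usr/bin/jinfo)
# reports "ELF 32-bit ..." for the offending build
file /usr/lib/jvm/java-1.6.0-sun-1.6.0.81.x86_64/bin/jinfo
# should report "ELF 64-bit ..."
/usr/lib/jvm/java-1.6.0-sun-1.6.0.81.x86_64/bin/jinfo -sysprops ${pid}
# run the matching 64-bit jinfo against the 64-bit JVM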
BugBuster, the alternative to Selenium
I have just seen a presentation of BugBuster: it's a JavaScript web functional-testing product with a recorder (a Chrome extension) which is totally integrated into your browser.
This means that it instruments the browser to monitor whether the previous operation (e.g. form submission, AJAX XML send) has completed... no need for "wait" statements.
I was really impressed by the cleanness of the interface, the simplicity of use, and the fact that it delivers compact code that you can review and customize if needed.
The pricing seems very honest, so it's worth a try.
The company is a spin-off of the University of Lausanne... it's nice to see a university doing research and creating products...
Labels:
webtest
Saturday, November 22, 2014
Jewish Auschwitz Survivor Ernst Lobethal (aka Ernest Lobet)
I have a keen interest in the history of the Holocaust (I believe that these days we have several Holocausts going on, the Palestinian one just to name one...)... so with some paid help I have written down the interview of one of "my favorite" survivors and written a draft article for Wikipedia... for reasons that totally escape me the article was not approved, so here it goes for all those who care... dedicated to all the migrants fleeing their countries, which have been destabilized by NATO terrorists under cover of pseudo-Islamic organizations.
Ernst Lobethal (Wrocław, 6 February 1923 – New York, 1 October 2012), aka Ernest Lobet, was a German Jew and a survivor of the Auschwitz-Birkenau concentration camp. The book The Man Who Broke into Auschwitz talks extensively about his internment in Auschwitz.
Biography
His father, Rudolf Lobethal, an affluent manager, fled to South Africa with most of the family patrimony. His mother, Freda Silberstine, died shortly after, in 1932. He was placed in an orphanage, then entrusted to a foster family, and later lived with his disabled grandmother, whom he supported by working in an exhaust tyre recycling factory in 1941-42. In January 1943 he was deported to Auschwitz in one of the last deportation trains from Breslau.
Experiences in Auschwitz
In Buna-Monowitz he was assigned to a construction Kommando, but he bought the benevolence of a kapo with the 100-Mark note he kept hidden in his belt, and managed to get a better job computing statistics in a civilian office. He casually met a British POW, "Ginger" (Denis Avey), who wrote a letter to Ernst's sister in Birmingham. Two months later he received 10 packs of cigarettes from his sister, and this small patrimony allowed him to improve his standard of living in the camp.
The Death Marches
On 18 January 1945, as the Russians were approaching, he was evacuated in a "death march" to Gleiwitz, some 65 km away, together with 10 thousand people from Buna and 30 thousand from Auschwitz III. Most of the prisoners died of cold and exhaustion along the way. The march lasted 24 hours without any stop. It is estimated that out of the 40-45 thousand who left Auschwitz, only 25 thousand survived the march. Ernst put himself at the head of the column, knowing that the first to arrive would be the better accommodated. He was then put in a roofless cattle car, 80 people to a wagon, and moved to Mauthausen, with no food, drinking melted snow along the trip. The Mauthausen camp was full, so they were shipped to another camp in Czechoslovakia. During this trip he lost his eyesight, probably due to malnutrition. While crossing Czechoslovakia, the local population tracked the passage of the train and threw food to the inmates from the bridges.
In his own words:
"as we were passing these overpasses in Czechoslovakia the passing of the train somehow was being telegraphed from place to place within Czechoslovakia and obviously if you were standing on an overpass the sight that you must have seen must have been something to behold for I don't know how many cattle cars there were but they were all open and inside you had these zebra clad skeletons huddled together listless like cows being slaughtered, being led to the slaughter house and obviously some of these Czechs had come with bread and they threw that from the overpass into the cars. Through our entire trip through Austria where of course you also had lots of overpasses and lots of civilians see what was passing underneath and through our entire trip through Germany after we left Czechoslovakia we would never again receive as much as a slice of bread from any of these Austrians or Germans "
Mittelbau Dora and Mauthausen
At the end of the evacuation he was assigned to work as a bricklayer's aid in the tunnels of Mittelbau Dora, the V2 rocket factory built into a mountain. In Mittelbau Dora the work and living conditions were appalling, so he pretended to be a locksmith and was transferred to Mauthausen in February 1945. Out of 6000 inmates, only 1500 were still alive 6 weeks later, because of the extreme malnutrition.
Liberation and emigration to the USA
On 11 April 1945 he survived an Allied bombing of the barracks with incendiary bombs, managed to escape from the destroyed camp, and joined the US troops, who accommodated him in a hotel in Sondershausen. He was given a pass to reach Paris, where for some months he earned his living through informal street commerce in G.I. cigarettes. Then he got a job as a taxi stand boy for the American Red Cross.
Eventually he found a sponsor to emigrate to the USA, where he landed in New York on Labor Day, 1 September 1947, on the ship Marine Flasher. He immediately had his tattooed Lager number removed and changed his name to Ernest Lobet. He was soon after drafted and sent to fight in the Korean War. After meeting his former schoolmate Henry Kamm he decided to go back to college and graduated in Engineering, then in Law. He married and had 3 children.
Labels:
books
Monday, November 17, 2014
Monitor CPU usage per thread in Java...
There is a thread on SO, from which I discovered the TopThreads plugin for JConsole:
http://arnhem.luminis.eu/new_version_topthreads_jconsole_plugin/
jconsole -J-Djava.class.path=C:\Oracle\MIDDLE~1\JDK160~1\lib\jconsole.jar;C:\Oracle\MIDDLE~1\JDK160~1\lib\tools.jar;C:\Oracle\Middleware\wlserver_10.3\server\lib\wlfullclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -pluginpath "c:\pierre\downloads\topthreads-1.1.jar" -debug
in the jconsole output you should see "Plugin class net.luminis.jmx.topthreads.PluginAdapter loaded."
and here you are: AWESOME!!!
Surely you can script it with ThreadJMX MBean, to create a monitoring tool, but this JConsole UI is so nice...
Also great is this trick: run top, then press Shift-H, take the decimal ID of the threads that eat the most CPU, convert it to hex and match it against the "nid" field in a jstack thread dump:
http://code.nomad-labs.com/2010/11/18/identifying-which-java-thread-is-consuming-most-cpu/
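A sketch of that correlation, with made-up PID/TID values for illustration:
top -H -p 12345                          # per-thread view of process 12345; note the TID of the hungriest thread
printf "nid=0x%x\n" 12399                # convert the decimal TID (say 12399) to the hex form used by jstack
jstack 12345 | grep -A 20 "nid=0x306f"   # locate that thread's stack in the dump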
JKS: protecting your Private Key with a password
Strangely, the Puppet java_ks module doesn't cater for protecting the private key with a password. This feature seems to be available only through the Oracle proprietary ImportPrivateKey tool, via the "-keyfilepass" option.
Another tool providing the same functionality is ExtKeyTool, available here.
If you don't need a scripting interface but are happy with a UI, you can use Keystore Explorer; it's really cool. It very simply allows you to export the private key in PKCS#8, PVK or OpenSSL formats. All these formats can be encrypted and protected with a password, to avoid the private key being stolen. Your .key file is not necessarily protected. Incidentally, if your .key file begins with "-----BEGIN RSA PRIVATE KEY-----", it's most likely an OpenSSL file.
Traditionally in the WebLogic world people use the utils.ImportPrivateKey utility; as you see, it supports it all: a password-protected key file (-keyfilepass), a password-protected JKS store (-storepass), and a password-protected key entry in the JKS store (-keypass):
cd $DOMAIN_HOME/bin
. ./setDomainEnv.sh
java utils.ImportPrivateKey

Usage: java utils.ImportPrivateKey
   -certfile -keyfile [-keyfilepass]
   -keystore -storepass [-storetype]
   -alias [-keypass]
   [-help]
Where:
   -certfile, -keyfile, -keyfilepass : certificate and private key files, and the private key password
   -keystore, -storepass, -storetype : keystore file name, password, and type. The default type is JKS.
   -alias, -keypass : alias and password of the keystore key entry where the private key and the public certificate will be imported. When the key entry password is not specified, the private key password will be used instead, or when it is not specified either, the keystore password.
In fact, you MUST protect your key with a password in the JKS file, but the .key file need not be protected (-keyfilepass can be omitted). The -keypass parameter is the same value you provide as "Private Key Passphrase" in the "SSL" configuration of the WebLogic Server. The -storepass corresponds to the "Custom Identity Keystore Passphrase" in the "Keystore" tab of the WL Console.
Another workaroundish way of doing it is using keytool and going through a pkcs12 keystore:
keytool -importkeystore [-v]
    [-srckeystore] [-destkeystore]
    [-srcstoretype] [-deststoretype]
    [-srcstorepass] [-deststorepass]
    [-srcprotected] [-destprotected]
    [-srcprovidername] [-destprovidername]
    [-srcalias [-destalias] [-srckeypass] [-destkeypass]]
    [-noprompt]
    [-providerclass [-providerarg]] ...
    [-providerpath]
Just use the -destkeypass option, and -srcstoretype PKCS12 (see this SO post).
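A minimal sketch of that PKCS12-to-JKS route (file names, aliases and passwords below are invented for illustration):
keytool -importkeystore -srckeystore mycert.p12 -srcstoretype PKCS12 -srcstorepass sourcepass \
        -destkeystore identity.jks -deststoretype JKS -deststorepass storepass \
        -srcalias mycert -destalias mycert -destkeypass keyentrypass
The -destkeypass is what ends up protecting the key entry in the JKS, i.e. the "Private Key Passphrase" mentioned above.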
Labels:
keytool
Sunday, November 16, 2014
Linux Certification LPIC
A really awesome resource, much better than any other training material I have seen so far:
http://lpic2.unix.nl/index.html
I am considering taking a Linux administration certification - there is an amazing amount of tricks I am totally unaware of, much to my shame.
More here https://www.lpi.org/linux-certifications/entry-level-credential/linux-essentials
Labels:
certifications,
linux
Monday, November 10, 2014
Install puppet modules recursively from a Puppetfile
If you have worked with r10k or puppet-librarian (or even worse, if you have managed Puppet modules manually) you must have realized how pathetic these tools are.
Luckily now Bruno Bieth comes to our rescue with a brilliant Swiss (actually: French!) knife:
https://github.com/backuity/puppet-module-installer
The tool is VERY simple to install (no gem, no ruby, no crap) and it gives you invaluable dependency analysis, consistency checks and graphs that will unveil any dodgy situation in your Puppet module dependencies.
Labels:
puppet
Friday, November 7, 2014
SBConsoleAccessException
I had to start the admin server on a different machine, and I was getting:
com.bea.alsb.console.common.base.SBConsoleAccessException: The current login role is not authorized to use the console action: "/sbSubModules"
The only way I managed to make it work was by replacing DOMAIN_HOME/servers/osbpp1ms2/data/ldap/ with the content of another server:
cd /opt/oracle/domains/osbpp1do/servers/osbpp1as/data/
cp -R ldap/ ldapOLD/
cp -R /opt/oracle/domains/osbpp1do/servers/osbpp1ms2/data/ldap/* ldap/
Thursday, November 6, 2014
Java: what is your default timezone?
cat MyTimezone.java
import java.util.TimeZone;

public class MyTimezone {
    public static void main(String[] args) throws Exception {
        TimeZone timeZone = TimeZone.getDefault();
        System.out.println(timeZone);
    }
}
javac MyTimezone.java
java MyTimezone
on one machine I have:
sun.util.calendar.ZoneInfo[id="Europe/Zurich",offset=3600000,dstSavings=3600000,useDaylight=true, transitions=119,lastRule=java.util.SimpleTimeZone[id=Europe/Zurich,offset=3600000,dstSavings=3600000, useDaylight=true,startYear=0,startMode=2,startMonth=2,startDay=-1,startDayOfWeek=1,startTime=3600000,startTimeMode=2,endMode=2, endMonth=9,endDay=-1,endDayOfWeek=1,endTime=3600000,endTimeMode=2]]
on another machine I have:
sun.util.calendar.ZoneInfo[id="Europe/Vaduz",offset=3600000,dstSavings=3600000,useDaylight=true, transitions=119,lastRule=java.util.SimpleTimeZone[id=Europe/Vaduz,offset=3600000,dstSavings=3600000, useDaylight=true,startYear=0,startMode=2,startMonth=2,startDay=-1,startDayOfWeek=1,startTime=3600000,startTimeMode=2,endMode=2, endMonth=9,endDay=-1,endDayOfWeek=1,endTime=3600000,endTimeMode=2]]
The second machine has timezone "Europe/Vaduz", and the Oracle DB doesn't recognize it:
SELECT * FROM V$TIMEZONE_NAMES where TZNAME = 'Europe/Vaduz';
nothing!
This explains why your JVM might get in trouble with the Oracle DB and you have to set explicitly the timezone yourself...
See also http://www.javamonamour.org/2014/11/ora-01882-timezone-region-not-found.html
Funnily:
diff /usr/share/zoneinfo/Europe/Vaduz /etc/localtime
diff /usr/share/zoneinfo/Europe/Zurich /etc/localtime
Both diffs come back empty: the files are identical.
How does Java determine its default timezone from the Linux machine it's running on?
On RHEL, to set the system timezone you can use redhat-config-date, system-config-time, timeconfig, or tzselect.
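As far as I understand the Sun JDK behaviour on Linux (my own reading, so take it with a grain of salt), it honours the TZ environment variable first and otherwise falls back to platform files such as /etc/sysconfig/clock and /etc/localtime, which is why the file comparison above matters. A quick way to see what the JVM has to work with:
echo $TZ                             # explicit override, if any
grep "ZONE=" /etc/sysconfig/clock    # the RHEL system setting
ls -l /etc/localtime                 # and compare it against /usr/share/zoneinfo, as done above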
Wednesday, November 5, 2014
ORA-01882: timezone region not found
After applying a package update on Linux RHEL, we get this on all Datasources:
ORA-01882: timezone region not found
First let's determine on which timezone our server is:
grep "ZONE=" /etc/sysconfig/clock
ZONE="Europe/Zurich"
let's open a connection to our DB and check:
SELECT * FROM V$TIMEZONE_NAMES where TZNAME = 'Europe/Zurich';
Europe/Zurich LMT
Europe/Zurich BMT
Europe/Zurich CET
Europe/Zurich CEST
I think CET is pretty decent...
I can fix the issue in 2 ways:
either I do "export TZ=CET" in my "soa" Linux user profile (.bashrc)
or I set the property in DOMAIN_HOME/bin/setDomainEnv.sh
-Duser.timezone=CET
Either will work.
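For the setDomainEnv.sh variant, this is roughly where I'd put it (JAVA_OPTIONS is the variable the standard domain scripts already export; the CET value is this post's choice):
JAVA_OPTIONS="${JAVA_OPTIONS} -Duser.timezone=CET"
export JAVA_OPTIONS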
Small test harness:
cat DBPing.java
import java.sql.DriverManager;
import java.sql.SQLException;

public class DBPing {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        Class.forName("oracle.jdbc.driver.OracleDriver");
        System.out.println("length " + args.length);
        String user = args[0];
        String password = args[1];
        String url = args[2];
        String now = new java.util.Date().toString();
        System.out.println(now + " user= " + user + " password=" + password + " url=" + url);
        java.sql.Connection conn = DriverManager.getConnection(url, user, password);
        System.out.println("ok");
    }
}
javac DBPing.java
java -cp /opt/oracle/fmw11_1_1_5/wlserver_10.3/server/lib/ojdbc6.jar:. DBPing acme ********** jdbc:oracle:thin:@mydb.acme.com:1551/s_gr00
length 3
Wed Nov 05 15:41:24 CET 2014 user= acme password=bla url=jdbc:oracle:thin:@mydb.acme.com:1551/s_gr00
Exception in thread "main" java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-01882: timezone region not found
        at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
        at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:389)
        at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:382)
        at oracle.jdbc.driver.T4CTTIfun.processError(T4CTTIfun.java:573)
        at oracle.jdbc.driver.T4CTTIoauthenticate.processError(T4CTTIoauthenticate.java:431)
        at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
        at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
        at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:366)
        at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:752)
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:366)
        at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:538)
        at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:228)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:521)
        at java.sql.DriverManager.getConnection(DriverManager.java:582)
        at java.sql.DriverManager.getConnection(DriverManager.java:185)
        at DBPing.main(DBPing.java:13)
second edition:
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.TimeZone;

public class DBPing {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        TimeZone timeZone = TimeZone.getTimeZone("Europe/Zurich");
        TimeZone.setDefault(timeZone);
        Class.forName("oracle.jdbc.driver.OracleDriver");
        System.out.println("length " + args.length);
        String user = args[0];
        String password = args[1];
        String url = args[2];
        String now = new java.util.Date().toString();
        System.out.println(now + " user= " + user + " password=" + password + " url=" + url);
        java.sql.Connection conn = DriverManager.getConnection(url, user, password);
        System.out.println("ok");
    }
}
by setting the timezone we fix the error...
Setting "oracle.jdbc.timezoneAsRegion=false" in the WebLogic Datasource properties also does the job. This probably is the least impact solution, as it affects only the DB...
CONNECTION_PROPERTY_TIMEZONE_AS_REGION
static final java.lang.String CONNECTION_PROPERTY_TIMEZONE_AS_REGION
Use JVM default timezone as specified rather than convert to a GMT offset. Default is true.
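Concretely, that means adding one line under the Datasource's Connection Pool > Properties field in the WebLogic console (navigation path from memory, so double-check it on your version):
oracle.jdbc.timezoneAsRegion=false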
Tuesday, November 4, 2014
Book: A Splendid Exchange: How Trade Shaped the World
This is a very enjoyable book, showing how trade shaped history, from Phoenician times to http://en.wikipedia.org/wiki/Zheng_He, Marco Polo, the British pirates and the http://en.wikipedia.org/wiki/Dutch_East_India_Company...
The only drawback is that it covers too many subjects and, in the end, none of them in great depth. And of course the author's positive view of modern trade is totally delusional, and doesn't take into account the immense toll on the environment.
Labels:
books
Wednesday, October 22, 2014
bash: redirect stdout and stderr for a block of "code"
(I was hesitant to call a bash script "code", as it's mostly a molasses of un-refactorable hieroglyphs)
vi redirtest.sh
{
  plutti
  plitti
} > plutti.log 2>&1
{
  pippi
} > pippi.log 2>&1
{
  echo ciao
} > ciao.txt
{
  echo miao
} > miao.txt
The commands "plutti" and "plitti" don't exist, so I will get errors like "./redirtest.sh: line 2: plutti: command not found" and "./redirtest.sh: line 2: plitti: command not found".
But only that block will be redirected to plutti.log.
Same story for pippi.log: you have 2 separate error logs for the 2 blocks of code.
So redirection doesn't necessarily have to happen at the whole-script level or at the single-statement level... one can group several statements in a "try/catch"-like block, which is cool... IMHO at least, it gives more flexibility...
Labels:
bash
Tuesday, October 21, 2014
WebLogic password change
quick miscellaneous notes on the topic... it's VERY easy to get the procedure wrong...
http://middlewaremagic.com/weblogic/?p=323
BEA-090078 User weblogic in security realm myrealm has had 5 invalid login attempts, locking account for 30 minutes.
See also: Problem Resetting the Weblogic User Password (Doc ID 1589360.1)
Tried to reset the weblogic user's password with these steps:
a. Stop the WebLogic Server instance.
b. Make a backup of the LDAP folder of the admin server as well as of the managed servers.
c. Set your environment variables by running setDomainEnv.sh (UNIX): . ./setDomainEnv.sh
d. cd to the security directory of your instance (e.g. $WL_HOME/user_projects/domains/base_domain/security)
e. Run: java weblogic.security.utils.AdminAccount weblogic password .
f. After running the command, the file "DefaultAuthenticatorInit.ldift" will get updated.
g. Delete the following file from the "ldap" folder: DefaultAuthenticatormyrealmInit.initialized
h. Edit the boot.properties file and change the password to the value already used in step e.
Then start NodeManager and WebLogic Server. Symptom in the Oracle note: log files show 'Running' but the WLS Admin Console login screen cannot be displayed. Changes: updated the password for the weblogic user with "java weblogic.security.utils.AdminAccount weblogic hello ." Don't forget the period "." at the end of the above command, it is required.
Remember also:
Reset the node manager password in the WLS Admin Console. Reference the note: WebLogic - Getting exception in WLST "weblogic.nodemanager.NMException: Access to domain for user denied" (Doc ID 889842.1)
A. Log in to the WLS Admin Console
B. In the Domain Structure navigation window on the left, click on the Domain name.
C. Click on the 'Security' link on the right-hand side
D. Click the Advanced link near the bottom
E. Update the 'NodeManager Password' to be the new password you created for weblogic > Save.
Labels:
weblogic
Sunday, October 19, 2014
Book: Jenkins, the definitive guide
This is a great read; the book covers not only Jenkins but also the vast ecosystem around Continuous Integration (Maven, Nexus...), all in a very readable way, without indulging too much in the gory details.
It's even downloadable for free from the above Wakaleo URL. Have fun!
Labels:
books
Wednesday, October 15, 2014
WebLogic, auditing invalid login attempts
I was getting messages like this in the log, after changing the weblogic password:
####<Oct 15, 2014 9:15:47 PM CEST> <Notice> <Security> <acme105> <osbpp1ms1> <[ACTIVE] ExecuteThread: '29' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <553e43a3c186ec6d:-ae5bdb3:149153b2e29:-8000-0000000000000068> <1413400547980> <BEA-090078> <User weblogic in security realm myrealm has had 5 invalid login attempts, locking account for 30 minutes.>
I was unable to trace the origin of this invalid login, until I setup a DefaultAuditRecorder:
http://docs.oracle.com/cd/E13222_01/wls/docs90/secmanage/providers.html
myrealm Providers Auditing New
Add these :
com.bea.contextelement.channel.Address
com.bea.contextelement.channel.ChannelName
com.bea.contextelement.channel.Port
com.bea.contextelement.channel.Protocol
com.bea.contextelement.channel.PublicAddress
com.bea.contextelement.channel.PublicPort
com.bea.contextelement.channel.RemoteAddress
com.bea.contextelement.channel.RemotePort
com.bea.contextelement.channel.Secure
and restart the server. Then you do
less /opt/oracle/domains/osbpp1do/servers/osbpp1ms1/logs/DefaultAuditRecorder.log
So the client's address is 10.56.10.188 and the remote port is 53443. I go on that box and I do
so if you are root you can find the PID of the offending process.
####<Oct 15, 2014 9:15:47 PM CEST> <Notice> <Security> <acme105> <osbpp1ms1> <[ACTIVE] ExecuteThread: '29' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <553e43a3c186ec6d:-ae5bdb3:149153b2e29:-8000-0000000000000068> <1413400547980> <BEA-090078> <User weblogic in security realm myrealm has had 5 invalid login attempts, locking account for 30 minutes.>
I was unable to trace the origin of this invalid login, until I setup a DefaultAuditRecorder:
http://docs.oracle.com/cd/E13222_01/wls/docs90/secmanage/providers.html
myrealm > Providers > Auditing > New
Add these:
com.bea.contextelement.channel.Address
com.bea.contextelement.channel.ChannelName
com.bea.contextelement.channel.Port
com.bea.contextelement.channel.Protocol
com.bea.contextelement.channel.PublicAddress
com.bea.contextelement.channel.PublicPort
com.bea.contextelement.channel.RemoteAddress
com.bea.contextelement.channel.RemotePort
com.bea.contextelement.channel.Secure
and restart the server. Then you do
less /opt/oracle/domains/osbpp1do/servers/osbpp1ms1/logs/DefaultAuditRecorder.log
#### Audit Record Begin <Oct 15, 2014 9:16:27 PM>
<Severity =FAILURE>
<<<Event Type = Authentication Audit Event><weblogic><AUTHENTICATE>>>
<FailureException =javax.security.auth.login.FailedLoginException: [Security:090304]Authentication Failed: User weblogic javax.security.auth.login.FailedLoginException: [Security:090302]Authentication Failed: User weblogic denied>
<<<CONTEXTELEMENT: com.bea.contextelement.channel.Port: 8001 CONTEXTELEMENT>>>
<<<CONTEXTELEMENT: com.bea.contextelement.channel.PublicPort: 8001 CONTEXTELEMENT>>>
<<<CONTEXTELEMENT: com.bea.contextelement.channel.RemotePort: 53443 CONTEXTELEMENT>>>
<<<CONTEXTELEMENT: com.bea.contextelement.channel.Protocol: t3 CONTEXTELEMENT>>>
<<<CONTEXTELEMENT: com.bea.contextelement.channel.Address: pippo2-osbpp1ms1.acme.com CONTEXTELEMENT>>>
<<<CONTEXTELEMENT: com.bea.contextelement.channel.PublicAddress: pippo2-osbpp1ms1.acme.com CONTEXTELEMENT>>>
<<<CONTEXTELEMENT: com.bea.contextelement.channel.RemoteAddress: /10.56.10.188 CONTEXTELEMENT>>>
<<<CONTEXTELEMENT: com.bea.contextelement.channel.ChannelName: Default[t3] CONTEXTELEMENT>>>
Audit Record End ####
So the client's address is 10.56.10.188 and the remote port is 53443. I go on that box and I do
netstat -an | grep 53443
tcp        0      0 10.56.10.188:53443    10.56.10.183:8001    ESTABLISHED

netstat --all --program | grep 53443
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp        0      0 acme106.acme:53443    pippo2-osbpp1ms:vcom-tunnel    ESTABLISHED    -
so if you are root you can find the PID of the offending process.
Labels:
weblogic
Sunday, October 12, 2014
Mount nfs and rpcbind
I was trying to mount an NFS4 share:
mount -t nfs 10.0.2.15:/drbd/main/shared/ /opt/oracle/domains/osbpl1do/shared
mount.nfs: Connection timed out
Of course, I forgot to start the nfs service:
/etc/init.d/nfs start
Starting NFS services:  [  OK  ]
Starting NFS quotas: Cannot register service: RPC: Unable to receive; errno = Connection refused
rpc.rquotad: unable to register (RQUOTAPROG, RQUOTAVERS, udp). [FAILED]
Starting NFS mountd: [FAILED]
Starting NFS daemon: rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)
rpc.nfsd: unable to set any sockets for nfsd [FAILED]
Mmmmm, what is going on here..... I try to get some info on the rpc service:
rpcinfo -p
rpcinfo: can't contact portmapper: RPC: Remote system error - No such file or directory
hell on Earth... let me restart the rpcbind service:
service rpcbind restart
Stopping rpcbind: [FAILED]
Starting rpcbind: [  OK  ]
rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
at last I can start the nfs service: /etc/init.d/nfs start
Starting NFS services:  [  OK  ]
Starting NFS quotas: [  OK  ]
Starting NFS mountd: [  OK  ]
Starting NFS daemon: [  OK  ]
Starting RPC idmapd: [  OK  ]
Labels:
nfs
Thursday, October 2, 2014
Fun run of JClarity Censum - Garbage Collection analysis tool
I have downloaded the 14-day trial of JClarity Censum. It's EXTREMELY simple to use and gives you quick feedback on your GC issues.
First, you HAVE to enable a few flags: -XX:+PrintTenuringDistribution and -XX:+PrintGCDetails (safe in PROD). Then you run your tests for some 24 hours and feed your GC log into Censum.
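For reference, the full set of JVM arguments this implies looks something like the line below (the log path is my own invention, and -Xloggc / -XX:+PrintGCDateStamps are standard HotSpot options I'd add to get a usable log file, not something Censum mandates):
-Xloggc:/opt/oracle/logs/gc.log -XX:+PrintGCDetails -XX:+PrintTenuringDistribution -XX:+PrintGCDateStamps
Censum then parses that gc.log.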
It gives you statistical info like:
Longest GC pause (I had 5 seconds, quite bad)
Percentage of time spent in GC pauses (mine is 9%, pretty bad, should be max 5%)
Average memory allocation rate (I had 90 MB / second, quite bad)
Max memory allocation rate (I had 900 MB / second, horrible)
Total Full GC pause time, and Total Pause time
Full GC to GC ratio (mine was 45%, pretty stiff)
Application throughput (percentage of time spent working, versus time spent in GC.... should be 95% or more)
The graphs are breathtaking...
All in all, they charge you a quite hefty (ANNUAL!) fee for this software... to perform analysis that doesn't seem all that complicated - I am sure I can figure out most of this stuff in Excel in a matter of hours!
However, if your time is precious, it seems a quite valuable tool...
WebLogic: check which patches are applied
Just throwing all available technology at this issue....
Method 1: grep BEA-141107 in the logs (I don't get anything)
Method 2: bsu
cd /opt3/oracle/fmw11_1_1_5/utils/bsu
./bsu.sh -report -bea_home=/opt3/oracle/fmw11_1_1_5/
Patch Report
============
Report Info
  Report Options
    bea_home.................. /opt3/oracle/fmw11_1_1_5/
    product_mask.............. ### OPTION NOT SET
    release_mask.............. ### OPTION NOT SET
    profile_mask.............. ### OPTION NOT SET
    patch_id_mask............. ### OPTION NOT SET
Report Messages
  BEA Home.................. /opt3/oracle/fmw11_1_1_5
  Product Description
    Product Name.............. Oracle Coherence
    Product Version........... 3.6.0.4
    Installed Components...... Coherence
    Product Install Directory. /opt3/oracle/fmw11_1_1_5/coherence_3.6
    Java Home................. null
    Jave Vendor............... null
    Java Version.............. null
    Patch Directory........... /opt3/oracle/fmw11_1_1_5/patch_ocp360
  Product Description
    Product Name.............. WebLogic Server
    Product Version........... 10.3.5.0
    Installed Components...... Core Application Server, Administration Console, Configuration Wizard and Upgrade Framework, Web 2.0 HTTP Pub-Sub Server, WebLogic SCA, WebLogic JDBC Drivers, Third Party JDBC Drivers, WebLogic Server Clients, WebLogic Web Server Plugins, UDDI and Xquery Support, Evaluation Database, Workshop Code Completion Support
    Product Install Directory. /opt3/oracle/fmw11_1_1_5/wlserver_10.3
    Java Home................. null
    Jave Vendor............... Sun
    Java Version.............. 1.6.0_24
    Patch Directory........... /opt3/oracle/fmw11_1_1_5/patch_wls1035
Method 3: opatch
export MW_HOME=/opt3/oracle/fmw11_1_1_5/
export ORACLE_HOME=/opt3/oracle/fmw11_1_1_5/osb
export JDK_HOME=/opt3/oracle/java
/opt3/oracle/fmw11_1_1_5/oracle_common/OPatch/opatch lsinv -all -jdk /opt3/oracle/java/ -invPtrLoc /opt/oracle/bin/software/oraInst_soa3.loc
Invoking OPatch 11.1.0.8.2

Oracle Interim Patch Installer version 11.1.0.8.2
Copyright (c) 2010, Oracle Corporation. All rights reserved.

Oracle Home       : /opt3/oracle/fmw11_1_1_5/osb
Central Inventory : /opt3/oracle/orainventory
   from           : /opt/oracle/bin/software/oraInst_soa3.loc
OPatch version    : 11.1.0.8.2
OUI version       : 11.1.0.9.0
OUI location      : /opt3/oracle/fmw11_1_1_5/osb/oui
Log file location : /opt3/oracle/fmw11_1_1_5/osb/cfgtoollogs/opatch/opatch2014-10-02_12-06-03PM.log
Patch history file: /opt3/oracle/fmw11_1_1_5/osb/cfgtoollogs/opatch/opatch_history.txt

OPatch detects the Middleware Home as "/opt3/oracle/fmw11_1_1_5"

Lsinventory Output file location : /opt3/oracle/fmw11_1_1_5/osb/cfgtoollogs/opatch/lsinv/lsinventory2014-10-02_12-06-03PM.txt
--------------------------------------------------------------------------------
List of Oracle Homes:
  Name          Location
  OH827611496   /opt3/oracle/fmw11_1_1_5/oracle_common
  OH401177498   /opt3/oracle/fmw11_1_1_5/osb

Installed Top-level Products (1):
Oracle Service Bus    11.1.1.5.0
There are 1 products installed in this Oracle Home.

Interim patches (1) :
Patch 12362492 : applied on Mon Sep 15 16:37:11 CEST 2014
Unique Patch ID: 14334178
   Created on 5 Dec 2011, 21:20:01 hrs PST8PDT
   Bugs fixed:
     12362492
--------------------------------------------------------------------------------
OPatch succeeded.
Method 4: look in WebLogic console, Monitoring tab, you should have a list of patches
Since the patch was applied with opatch, it can only be detected by opatch. Strange though that no trace of this patch appears in the logs...
I assume that bsu is basically dead now, and all patches should be installed with opatch... I am not an expert though...
WebLogic: Registered more than one instance with the same objectName
We kept getting an error in the logs:
BEA-149500>(RuntimeMBeanDelegate.java:255) at weblogic.management.runtime.RuntimeMBeanDelegate. (RuntimeMBeanDelegate.java:215) at weblogic.management.runtime.RuntimeMBeanDelegate. (RuntimeMBeanDelegate.java:193) at weblogic.management.runtime.RuntimeMBeanDelegate. (RuntimeMBeanDelegate.java:182)
until we restarted and it went away. No change was done in the configuration recently.
Looking in Oracle Support I found:
BEA-149500 Error Registered more than one instance with the same objectName using JDBC TLOG Store (Doc ID 1544879.1)
WebLogic Server tries to create a new MBean every time a PersistentStoreRuntimeMBean is called. This issue has been fixed in unpublished defect 16063328
Our MBean type is "JMSPooledConnectionRuntime" and not "PersistentStoreRuntime", but maybe the root cause is the same... anyway I am not going to apply the patch, as this has occurred only once...
Labels:
weblogic
Monday, September 29, 2014
Puppet: file recurse remote caveat
Puppet provides a beautiful "macro", file recurse remote, by which you keep a whole directory (with subdirectories!) in sync with the content of a Puppet files repository. Great.
BUT. There is a BUT: if, apart from maintaining the whole folder (say /myfiles) with recurse-remote, you also maintain a file in a subfolder (say /myfiles/myfolder/hello.txt), this breaks the whole synchronization of /myfiles/myfolder/. OTHER subfolders (say /myfiles/myotherfolder/) will synchronize perfectly, but not /myfiles/myfolder/.
Now you know, so you can plan your workarounds to this - maybe unexpected - behavior.
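A tiny sketch of the clash described above, in plain Puppet (paths and module name invented for illustration):
file { '/myfiles':
  ensure  => directory,
  recurse => remote,
  source  => 'puppet:///modules/acme/myfiles',
}

# managing this single file separately is what breaks the sync of /myfiles/myfolder/ as a whole
file { '/myfiles/myfolder/hello.txt':
  ensure => file,
  source => 'puppet:///modules/acme/hello.txt',
}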
Labels:
puppet
Java JVM Flag PrintConcurrentLocks
When all hell breaks loose and you have threads hanging, waiting for a lock, it's PARAMOUNT to be able to determine WHO is holding that lock.
By default, the option -XX:+PrintConcurrentLocks is not enabled. If you enable it and you do a "kill -3 PID", you should get for each thread the list of locks being held. This option "should be" safe in PROD, despite the Oracle warnings.
However, you can get the same info using
jstack -l PID
"Reference Handler" daemon prio=10 tid=0x000000004e8b6000 nid=0x48c8 in Object.wait() [0x000000004104b000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x0000000782f36550> (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:485) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116) - locked <0x0000000782f36550> (a java.lang.ref.Reference$Lock) Locked ownable synchronizers: - None 0x0000000782f36550>0x0000000782f36550>
The "Locked ownable synchronizers:" bit is the one you would get extra by using the PrintConcurrentLocks flag.
Anyway this is only a superficial analysis... luckily working with OSB I never had to deal with locking issues... so I cannot claim to be an expert.
Labels:
JVM
Saturday, September 27, 2014
Installing a NFS server, and creating a mountpoint, in 30 seconds
This is on RHEL/CentOS. For lazyness simplicity I install client and server on the same box :o), whose IP is 10.0.2.15.
Installing a NFS server:
http://www.howtoforge.com/setting-up-an-nfs-server-and-client-on-centos-5.5
yum install nfs-utils nfs-utils-lib
chkconfig --levels 235 nfs on
/etc/init.d/nfs start
mkdir /var/nfs
chmod 777 /var/nfs
vi /etc/exports
/var/nfs 10.0.2.15(rw,sync,no_subtree_check)
exportfs -a
Creating a mountpoint
mkdir /home/users
mount -t nfs 10.0.2.15:/var/nfs /home/users
echo ciao > /home/users/ciao.txt
cat /var/nfs/ciao.txt
ciao
done :o)
Useful links:
https://www.suse.com/communities/conversations/configuring-nfsv4-server-and-client-suse-linux-enterprise-server-10/
http://www.javamonamour.org/2014/10/mount-nfs-and-rpcbind.html
Labels:
nfs
Book: Spinoza, a life
Brilliant biographical and historical book, splendidly depicting the Netherlands and the Jewish community in Holland in the 1600-1700 period.
Steven Nadler is one of the top world experts on Spinoza, and he gives an image of this great intellectual which is much more multifaceted and dynamic than the traditional one, which tends to portray Spinoza as a very secluded and shy man. On the contrary, he had a very active social life and was a great innovator in a lot of realms, mainly Optics, Astronomy and Biology.
Labels:
books
Wednesday, September 24, 2014
Book: Unveiling India
This book tells stories of "ordinary" (actually, extraordinary) Indian women in their daily struggle in a very sexist country.
The book was written in 1987 and things have changed a lot; however, it's a wonderful testimony of real life, interviewing hundreds of women - mostly from the low-income class - across the whole of India.
"In a dust-filled yard I meet an old woman bent over a pile of dry palm leaves. 'Child, don't sit on the ground. Let me spread a mat for you,' she says without looking up, pulling from under her a tattered mat which I realize is the only one she owns. She is eighty years old and lives alone. Her three sons are married and have gone away to bigger villages. Who looks after her? She points to the pile of palm leaves and goes back to cleaning them."
Labels:
books
Tuesday, September 23, 2014
Puppet: mount a SAMBA share at boot
First, create a credentials file in your acme module:
acme\files\samba\myshare.smb.credentials
and inside put:
username=myuser password=mypassword
I was unable to make it mount an unprotected share..... so just have it protected by username/password.
Then in your acme init.pp read a boolean
$acme_install_sambamyshare = any2bool(hiera('acme_install_sambamyshare', false)),
Create a samba.pp class (file):
# == Class: acme::samba
#
# Manages samba mounts
#
class acme::samba {
  if $acme::acme_install_sambamyshare {
    file { '/data/myshare/':
      ensure => directory,
      owner  => "soa",
      group  => 'soa',
      mode   => '0775',
    }

    file { '/etc/myshare.smb.credentials':
      source => 'puppet:///modules/acme/samba/myshare.smb.credentials',
      owner  => root,
      mode   => '0700',
    } ->

    mount { 'myshare':
      name    => '/data/myshare/',
      atboot  => 'true',
      device  => '//windowshost/sharedfolder',
      ensure  => 'mounted',
      fstype  => 'cifs',
      options => "credentials=/etc/myshare.smb.credentials,rw,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777",
      require => [Package['samba-client'], Package['cifs-utils'], File['/data/myshare/'], File['/etc/myshare.smb.credentials']],
    }
  }
}
Don't forget to declare the samba class in your init.pp!
If you do cat /etc/fstab you should see something like this:
//windowshost/sharedfolder /data/myshare/ cifs credentials=/etc/myshare.smb.credentials,rw,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
If you want to do the same thing manually, this is the recipe:
yum install smbclient
# on Ubuntu: apt-get install cifs-utils
/bin/mount -t cifs -o username=someuser,password=somepassword,rw,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777 //windowshost/sharedfolder /data/myshare/
To remove a mountpoint, use umount:
umount /data/myshare/
it also supports a -f (force) option:
umount -f /data/myshare/
See also here
OSB: extending a domain by adding extra Managed Servers
See the Oracle document "WLST Script to Add a Managed Server to an Existing OSB Cluster (Doc ID 1450865.1)"; it provides a WLST script, extendOSBDomain.py, to do the job.
The script must:
- Create the new MS and set its listen-address and port etc etc
- Alter the cluster to add the new MS to the cluster address
- Create the JMS resources ["FileStore", "WseeFileStore"] (filestore, Jmsserver), creating also the $DOMAIN_HOME/WseeFileStore_auto_* etc folders, the JMS module and queues ["wli.reporting.jmsprovider.queue", "wli.reporting.jmsprovider_error.queue", "wlsb.internal.transport.task.queue.email", "wlsb.internal.transport.task.queue.file", "wlsb.internal.transport.task.queue.ftp", "wlsb.internal.transport.task.queue.sftp","QueueIn"]
- create a SAF agent
So, on the whole, the job is not as simple as "clone an existing MS". Incidentally, cloning doesn't reproduce exactly all the settings of the original MS: for instance, log properties like "RotateLogOnStartup" and "limit number and size" are NOT copied, nor are log filters (and forget about filestores, JMS resources etc.).
See also this doc about scaling up.
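For illustration, here is a minimal WLST sketch of just the first two steps (server, cluster, host and port names are placeholders; the real extendOSBDomain.py from Doc ID 1450865.1 also creates the JMS resources, filestores and SAF agent listed above):

# hedged WLST sketch: create a new managed server and add it to the cluster
# ('osb_server3', 'osb_cluster', hosts and ports are hypothetical)
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
cd('/')
cmo.createServer('osb_server3')
cd('/Servers/osb_server3')
cmo.setListenAddress('osbhost3.acme.com')
cmo.setListenPort(8013)
cmo.setCluster(getMBean('/Clusters/osb_cluster'))
# keep the cluster address in sync with the new member
cd('/Clusters/osb_cluster')
cmo.setClusterAddress('osbhost1:8011,osbhost2:8012,osbhost3:8013')
save()
activate()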
Monday, September 15, 2014
Puppet: quick hiera setup
cat /home/soa/.puppet/hiera.yaml
:hierarchy:
  - common
:backends:
  - yaml
:yaml:
  :datadir: '/home/soa'
cat /home/soa/common.yaml
install_cleanssbatchorder : true
To test if hiera is working:
puppet apply -e "if hiera('install_cleanssbatchorder', false) { notify { 'pippo':}} "
this should print:
notice: pippo
notice: /Stage[main]//Notify[pippo]/message: defined 'message' as 'pippo'
notice: Finished catalog run in 0.02 seconds
Sunday, September 14, 2014
Puppet: minimalistic custom fact
in your module, create a lib/facter folder.
There, create a customxpathversion.rb file containing this code:
require 'puppet'

Facter.add("customxpathversion") do
  setcode 'cat /opt/oracle/scripts/config/customxpathversion.txt'
end
where the file customxpathversion.txt contains "1.10"
Now if in your puppet code you do
notify { "${::customxpathversion}" : }
you get 1.10
Of course one should handle errors etc., but it works and I don't have to bother about $LOAD_PATH. From the command line, "facter customxpathversion" will not work, probably because it's not in the LOAD_PATH.
Here is the whole documentation. Update: you can use this custom fact from the facter command line if you set the FACTERLIB environment variable:
find / -name customxpathversion*
/tmp/vagrant-puppet-3/modules-0/pippo/lib/facter/customxpathversion.rb
^C
[soa@osb-vagrant ~]$ export FACTERLIB=/tmp/vagrant-puppet-3/modules-0/pippo/lib/facter/
[soa@osb-vagrant ~]$ facter customxpathversion
1.10
Labels:
puppet
Monday, September 8, 2014
WebLogic: Installing WebLogic 10.3.5 binaries and creating a domain in 2 minutes
Sometimes one needs to create a throwaway domain in minutes... and I hate having to remember all the small details...
how to create a weblogic domain
login to myhost
sudo su - soa
cd /opt/oracle/software
make sure wls1035_generic.jar is there
type "java -d64 -version"
if this gives 'java version "1.6.0_33"' -> OK
vi silent.xml
<?xml version="1.0" encoding="UTF-8"?>
<bea-installer>
  <input-fields>
    <data-value name="BEAHOME" value="/opt/oracle/fmw11_1_1_5" />
    <data-value name="WLS_INSTALL_DIR" value="/opt/oracle/fmw11_1_1_5/wlserver_10.3" />
    <data-value name="COMPONENT_PATHS" value="WebLogic Server/Core Application Server|WebLogic Server/Administration Console|WebLogic Server/Configuration Wizard and Upgrade Framework|WebLogic Server/Web 2.0 HTTP Pub-Sub Server|WebLogic Server/WebLogic JDBC Drivers|WebLogic Server/Third Party JDBC Drivers|WebLogic Server/WebLogic Server Clients|WebLogic Server/WebLogic Web Server Plugins|WebLogic Server/UDDI and Xquery Support|Oracle Coherence/Coherence Product Files|Oracle Coherence/Coherence Examples"/>
    <data-value name="NODEMGR_PORT" value="5556" />
    <data-value name="INSTALL_SHORTCUT_IN_ALL_USERS_FOLDER" value="no"/>
    <data-value name="LOCAL_JVMS" value="/usr/lib/jvm/java-1.6.0-sun-1.6.0.33.x86_64"/>
  </input-fields>
</bea-installer>

then run the installer in silent mode:

java -d64 -jar /opt/oracle/software/wls1035_generic.jar -mode=silent -silent_xml=silent.xml
alias wlst="/opt/oracle/fmw11_1_1_5/wlserver_10.3/common/bin/wlst.sh"
vi createToolsDomain.py
readTemplate('/opt/oracle/fmw11_1_1_5/wlserver_10.3/common/templates/domains/wls.jar')
cd('Servers/AdminServer')
set('ListenPort', 7101)
set('ListenAddress','myhost.acme.com')
cd('/')
cd('Security/base_domain/User/weblogic')
cmo.setPassword('bla')
setOption('OverwriteDomain', 'true')
writeDomain('/opt/oracle/domains/toolsdomain')
closeTemplate()
exit()
run this:
wlst createToolsDomain.py
cd /opt/oracle/domains/toolsdomain
vi start.sh
nohup /opt/oracle/domains/toolsdomain/startWebLogic.sh > /opt/oracle/domains/toolsdomain/adminserver.out 2>&1 &
chmod 755 start.sh
Console in http://myhost.acme.com:7101/console/
Labels:
weblogic
Saturday, September 6, 2014
WebLogic : awesome presentation on what is new in 12.1.3
http://download.oracle.com/tutorials/ecourse/fmw/wls/wls_12.1.3_new_features/presentation.html
I VERY HIGHLY recommend going through this 2-hour presentation on the new features of WebLogic 12.1.3.
you will learn - amongst other stuff - about:
- dynamic clusters and server templates
- automatic server migration
- REST interface for Administrative commands - instead of WLST (awesome! see the small Python sketch after this list)
- Exalogic (replicated store, cooperative memory management, JDBC optimizations)
- Transactions
- JMS (integration with AQ JMS, migrateable servers)
- Security: separate keystore per channel, Virtual Users with X509
- Classloader Analyzer Tool (CAT)
- Managing resources with Enterprise Manager
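As a taste of that REST interface, here is a hedged Python 2 sketch; the /management/wls/latest path, host, port and credentials are assumptions, and RESTful Management Services must first be enabled on the domain:

# hedged sketch: list the domain's servers via the 12.1.3 management REST API
import base64
import urllib2

url = "http://adminhost:7001/management/wls/latest/servers"
request = urllib2.Request(url)
request.add_header("Accept", "application/json")
request.add_header("Authorization", "Basic " + base64.b64encode("weblogic:welcome1"))
print urllib2.urlopen(request).read()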
Labels:
weblogic
Books: The Politics of Heroin, The History of the First World War
Just finished reading these 2 MASSIVE history books. The first tells a lot of really interesting stories on how geopolitical interests control the production and traffic of heroin in the world. Most of the book is about the Golden Triangle (Thailand, Laos, Myanmar), and it really helps to read at least the Wikipedia articles on these 3 countries before reading this book, because things are really intricate.
http://en.wikipedia.org/wiki/The_Politics_of_Heroin_in_Southeast_Asia
It's definitely a book you need to read twice to digest it.
This one is a very comprehensive coverage of the military, social and political events before, during and after WW I. This war shaped the world and laid the foundations for WW II, so it's not really "old stuff"; it's an account of how modern warfare got organized into an industrial way of killing.
The author is very well documented and exposes the subject very well. However, I got the impression that he underestimates the role that International Finance had in preparing the war - my personal opinion is that events of that caliber are planned several years in advance by the International Elites to reshape the world to their advantage, and to pile up massive profits from war-related business. The USA intervened at the last minute and played only a relatively minor role, yet they almost unilaterally dictated the peace conditions - to the great irritation of the UK and France. They played basically the same trick in WW II. Hence I would really like to understand the role they played in preparing the war - even if they always pretended to be completely isolated from the Old Continent.
Labels:
books
Wednesday, September 3, 2014
WebLogic: moving JMS filestores around
If a domain dies, and the filestores contain precious JMS messages still to be processed, how do you handle this?
There is a "filestore export" utility, but sadly it doesn't have an equivalent "import" functionality.
We just made a silly experiment - silly because this is a total hack and unsupported by Oracle, yet it seems to work.
Shut down both the source and destination Managed Servers. Physically replace the destination .DAT filestore with the source filestore (each managed server has its own filestore, so you have to repeat this operation for every filestore/managed server). Start the destination server. By miracle, the JMS messages appear on the destination server. Of course you lose all the previous content of the destination filestore...
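For illustration, a minimal Python sketch of the file swap itself; the paths and the .DAT file name are hypothetical (locate the real ones under each managed server's data/store directory), and both servers must be stopped first:

# hedged sketch: overwrite the destination filestore with the source one
import shutil

src = '/domains/src/servers/osb_server1/data/store/FileStore_auto_1/FILESTORE000000.DAT'
dst = '/domains/dst/servers/osb_server1/data/store/FileStore_auto_1/FILESTORE000000.DAT'

shutil.copy2(dst, dst + '.bak')   # back up the destination store we are about to lose
shutil.copy2(src, dst)            # overwrite it with the source server's filestore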
There is a "filestore export" utility, but sadly it doesn't have an equivalent "import" functionality.
We just made a silly experiment - silly because this is a total hack and unsupported by Oracle, yet it seems to work.
Shutdown both source and destination Managed servers. Physically replace the destination .DAT filestore with the source filestore (each managed server has its own filestore....so you have to make this operation for all filestores=managed servers) . Start the destination server. By miracle the JMS messages appear on the destination server. Of course you lose all the previous content of the destination filestore...
WebLogic: WLST timeout connecting on a secure port
Connecting to a WebLogic server on a SSL port, although highly recommended by Oracle, can have a major drawback: it's damn slow to establish connection.
AFAIK there is no way to speed this up. I have tried the customary "urandom" optimization, to no avail.
So I had to increase the "SSL Login Timeout" (server/configuration/tuning).
"Specifies the number of milliseconds that WebLogic Server waits for an SSL connection before timing out. SSL connections take longer to negotiate than regular connections."
in WLST:
serverName = 'osbts1do'
cd('/Servers/' + serverName + '/SSL/' + serverName)
cmo.setLoginTimeoutMillis(30000)
Tuesday, September 2, 2014
Python: run multiple processes in parallel and wait for completion
This code is quite minimalistic but it works really well. It uses a very old API which is compatible with Python 2.1 (WLST embeds a very old version of Python...).
import os
from sets import Set

processes = Set()
for command in commandlistlist:
    print "running", command
    processes.add(os.popen(command))

# wait until all processes have completed
for proc in processes:
    proc.read()
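For reference, commandlistlist is just a list of shell command strings, for example (hypothetical commands):

# hypothetical example of the command list consumed by the loop above
commandlistlist = [
    "sleep 2; echo first done",
    "sleep 1; echo second done",
    "sleep 3; echo third done",
]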
Labels:
python
Monday, September 1, 2014
sqlplus vs sqldeveloper
When I run a DDL script in SQLDeveloper, it just runs perfectly.
The same script in SQLPlus throws plenty of errors like:
ORA-00955: name is already used by an existing object
Incidentally, SQLPlus drives me completely crazy because it never tells you in a single line which TABLE is actually giving trouble, so it becomes really difficult to grep and report errors...
Anyway, it turns out that if you run this in SQLPlus:
CREATE TABLE "BUILD_POINTS" ( "ENV" VARCHAR2(2 BYTE), "SUCCESS" NUMBER, "POINTS" NUMBER ) ; /it interprets the final / as a "repeat last command", which obviously fails because the table was already created.
The problem is that I generate this DDL with a SQLDeveloper export, so I have little control over how it is generated. I cannot unconditionally remove all the / , because in the "create package" statements they are absolutely necessary. I should write a parsing script that conditionally removes the / when it belongs to a "create table".
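A hedged sketch of such a filter in Python; the file names are placeholders and the heuristic is deliberately simple (it keeps a trailing / only when it follows a PL/SQL object):

# hedged sketch: drop the "/" that follows plain DDL, keep it after PL/SQL objects
import re

plsql = re.compile(r'^\s*CREATE(\s+OR\s+REPLACE)?\s+(PACKAGE|PROCEDURE|FUNCTION|TRIGGER|TYPE)\b', re.I)
in_plsql = False
with open('export.sql') as src, open('export_fixed.sql', 'w') as dst:
    for line in src:
        if plsql.match(line):
            in_plsql = True
        if line.strip() == '/':
            if in_plsql:
                dst.write(line)   # this "/" executes a PL/SQL object, keep it
                in_plsql = False
            continue              # drop the "/" that follows plain DDL
        dst.write(line)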
Being able to edit tables in SQLDeveloper and then export them is too convenient....I don't want to resort to having to code SQL manually...
One workaround could be inserting a table comment for each table, so that the / would be inserted after the "COMMENT ON TABLE" statement and not after the "CREATE TABLE" statement. Repeating a "COMMENT ON TABLE" doesn't generate any error.
See also this and better still this
An excellent solution is to upgrade to SQLDeveloper 4, which doesn't generate the ";" followed by "/" sequence.
Check also the checkbox called "Terminator", which when selected uses semicolons to terminate each statement. It's in Tools/Database/Utilities/Export.
Labels:
sqldeveloper,
sqlplus
Friday, August 29, 2014
WebLogic: disabling JMS message consumption at startup
One can TEMPORARILY disable consumption from a JMS Destination (JMSServer / Active Destinations / Monitor), but when you restart, the queue is enabled again. In production this can be fatal.
One can disable PERMANENTLY consumption with this setting in the queue configuration:
See http://docs.oracle.com/cd/E23943_01/apirefs.1111/e13952/taskhelp/jms_modules/distributed_queues/ConfigureUDQPauseMessages.html
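In WLST this should look roughly like the sketch below; the attribute name ConsumptionPausedAtStartup and the string value are my recollection of the console setting, and the module/queue names are placeholders - verify against your own config.xml before relying on it:

# hedged WLST sketch: permanently pause consumption on a (uniform distributed) queue
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
cd('/JMSSystemResources/MyJmsModule/JMSResource/MyJmsModule/UniformDistributedQueues/MyQueue')
cmo.setConsumptionPausedAtStartup('true')
save()
activate()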
Wednesday, August 27, 2014
OSB and SLA alerts in Eclipse
When you define an SLA alert in the OSB Console (sbconsole), this xml is added to your .proxy file:
<ser:alertRules>
  <ser:alertRule name="slow" enabled="true">
    <aler:description>slow</aler:description>
    <aler:AlertFrequency>every-time</aler:AlertFrequency>
    <aler:AlertSeverity>normal</aler:AlertSeverity>
    <aler:StopProcessing>false</aler:StopProcessing>
    <aler:Condition type="statistics">
      <aler:config aggregation-interval="10" xsi:type="mon:monitoringConditionType" xmlns:mon="http://www.bea.com/wli/monitoring/alert/condition/monitoringstatistic">
        <mon:monCondExpr>
          <mon:function>max</mon:function>
          <mon:lhs>Project$PVtest/PVtest/Transport/response-time</mon:lhs>
          <mon:lhs-operand-type xsi:nil="true"/>
          <mon:lhsDisplayName>Response Time</mon:lhsDisplayName>
          <mon:operator>></mon:operator>
          <mon:rhs>600000</mon:rhs>
        </mon:monCondExpr>
      </aler:config>
    </aler:Condition>
    <aler:AlertDestination ref="PVtest/pier"/>
    <aler:AlertSummary>slow</aler:AlertSummary>
  </ser:alertRule>
</ser:alertRules>

where PVtest/pier is an AlertDestination. The horrifying thing is that if I import this sbconfig.jar into Eclipse, the alertRules element is emptied. You can find some reference in the comments here and here. Of course I am shocked and horrified, because this is horrible and horrific. So if you want to keep those SLA alerts, you must be very careful with Eclipse....
Labels:
OSB
Sunday, August 24, 2014
Videos on how to upgrade to SoaSuite 12c
Oracle official documentation on how to install or upgrade
Really excellent presentation covering all the steps to upgrade your domains to 12c, including DB migration:
Labels:
soasuite12
Video Tutorial on Oracle Service Bus 12 development in JDeveloper
it's soooo much easier than in the old Eclipse IDE.... here everything is integrated (IDE, running embedded WebLogic server, Test Console, logs) and you can create a WSDL using an inline wizard.... it LOOKS like development is a lot easier ....
many thanks to Marcelo Jabali for the video
Labels:
soasuite12
Saturday, August 23, 2014
More books: When Nietzsche wept, This time we went too far
When Nietzsche Wept very enjoyable novel by Irvin Yalom, about the life and work of 3 extraordinary Austrian and German intellectuals: Friedrich Nietzsche, Josef Breuer and Sigmund Freud.
"This time we went too far" is the harrowing account of the Cast Lead operation in Gaza 2008, by Norman Finkelstein. A must read, very scholar and accurate. Lot of details on the Goldstone report.
Of the same author, the somehow more specialistic "Goldstone Recants", on the "weird" recantation of Goldstone.
"This time we went too far" is the harrowing account of the Cast Lead operation in Gaza 2008, by Norman Finkelstein. A must read, very scholar and accurate. Lot of details on the Goldstone report.
Of the same author, the somehow more specialistic "Goldstone Recants", on the "weird" recantation of Goldstone.
Labels:
books
Friday, August 22, 2014
Solidarity with the children of Gaza
In case you want to help stitch together the children that the Israeli "Defense" Force have so surgically torn apart, you can send money to:
Account details:
Account Name: Bethlehem Arab Society for Rehabilitation
Account No. 1101-024285-111
IBAN: FR76 1797 9000 0100 0242 8511 172
Bank details:
Europe Arab Bank plc
26, Avenue des Champs-Élysées
75365 Paris Cedex 08
Tel: 33 1 45 61 60 00
Fax: 33 1 42 89 09 78
Swift: ARABFRPP
I have just sent 200 CHF...
Labels:
palocaust
Wednesday, August 20, 2014
Git, configuring SSH on Windows
A great feature of git over ssh is that, when it fails, you are left with very vague error messages that can point to 1000s of different causes. It could hardly stink more. Really only for the expert.
c:\pierre>git clone ssh://git@stash.acme.com:7999/pup/puppet-pippov2.git
Cloning into 'puppet-pippov2'...
Could not create directory '/c/WINDOWS/system32/config/systemprofile/.ssh'.
The authenticity of host '[stash.acme.com]:7999 ([10.56.8.198]:7999)' can't be established.
RSA key fingerprint is fb:09:c7:a7:79:07:99:35:3a:88:51:18:59:a6:8e:75.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/c/WINDOWS/system32/config/systemprofile/.ssh/known_hosts).
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I really have no clue where this systemprofile is coming from... must be some Windows default...
c:\pierre>echo %userprofile%
C:\Users\Pierluigi
c:\pierre>echo %HOME%
C:\WINDOWS\system32\config\systemprofile
Aha, see: the %HOME% variable is set to something WRONG.
I tried EXPLICITLY setting the %HOME% variable as a SYSTEM environment property (not a USER.... it didn't work....):
c:\pierre\vagrant>echo %HOME%
C:\Users\Pierluigi\
Now it all works. It can create the known_hosts file.
Windows stinks.
git and ssh, two skunks
Git feature branch
More and more we are using "feature branch" rather than committing directly to master.
http://martinfowler.com/bliki/FeatureBranch.html
First, let's verify on which branch we are:
git checkout
Your branch is up-to-date with 'origin/master'.
and which branches are available in the current repo:
git branch
* master
it doesn't hurt to explicitly make sure we are on master:
git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
and if needed, let's get the latest from the central repo:
git pull --rebase
we can now create a branch:
git checkout -b feature/pippo
and make sure the branch is actually created:
git branch
* feature/pippo
master
since I am paranoid, I again switch to this new branch:
git checkout feature/pippo
Already on 'feature/pippo'
I create a file:
echo ciao > ciao.txt
and
git add ciao.txt
and commit
git commit -m "added ciao"
if you switch to "master", ciao.txt disappears, because it's only in the "feature/pippo" branch:
git checkout master
ciao.txt is gone!
git checkout feature/pippo
ciao.txt is there again!
Now we need to push the branch to the central repo:
git push
fatal: The current branch feature/pippo has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin feature/pippo
you can simply do:
git push -u origin HEAD
Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 273 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
To ssh://git@stash.acme.com:7999/gs/playgroup.git
 * [new branch]      HEAD -> feature/pippo
Branch feature/pippo set up to track remote branch feature/pippo from origin.
How to promote to master the changes in "feature/pippo" ? With a pull request, followed by a code review and a test! In github or stash you can create the pull request using the UI. This is a very cool and organized way to include changes to the master branch in a controlled, "atomic" way - a whole branch is a unit of changes that can be excluded as a whole if needed.
Labels:
git
Thursday, August 14, 2014
Git detached HEAD
Normally I do all my git operations from command line. It might be more effort than using Eclipse/Geppetto plugin, but at least I know exactly what I am doing.
Recently I have made the terrible mistake of moving some files within the Geppetto (Eclipse) IDE.
Suddenly, the "HEAD detached" or "detached HEAD" message started appearing in my "git status":
git status
HEAD detached at 05eff15
and when I commit stuff, it sure succeeds but it's in the detached head, not in my master branch:
git commit -a -m " moved domains files"
[detached HEAD c67cdde] moved domains files 3 files changed, 82 insertions(+)
and when I try to push, I obviously get the message
git push
fatal: You are not currently on a branch. To push the history leading to the current (detached HEAD) state now, use git push origin HEAD:
equally, pull --rebase will fail:
git pull --rebase
From ssh://stash.acme.com:7999/pup/puppet-nesoav2
   14bba35..05eff15  master -> origin/master
You are not currently on a branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details.

    git pull
So basically I am screwed, unless I can rejoin my master branch and put into it all the commits I have made in the meantime in the detached HEAD. Bugger. I really wish git gave a BIGGER warning, since you don't really want to work normally in a detached HEAD.
I do this: first I join again my master branch:
git checkout master
Warning: you are leaving 3 commits behind, not connected to
any of your branches:

  c67cdde moved domains files
  1ab0beb moved domains files
  ccdbcd0 usr files
ok this tells me that c67cdde is the hash of my last commit in detached HEAD.... let's go back to it:
git checkout c67cdde
Note: checking out 'c67cdde'. You are in 'detached HEAD' state.
then I create a new branch in which to commit that stuff:
git checkout -b tmp
Switched to a new branch 'tmp'
then I go back to master
git checkout master
and I merge the tmp branch
git merge tmp
Updating 05eff15..c67cdde
Fast-forward
I will confirm now that my latest commits are there in master:
git log -3
and I delete the now useless tmp branch:
git branch -d tmp
Labels:
git
Wednesday, August 13, 2014
default: VirtualBox VM is already running.
I have cloned an existing Vagrantfile, changed the properties "config.vm.box" and "config.vm.host_name", and done a "vagrant up" hoping to have a brand new VM available. After all this is what Vagrant should be good at.
Nope. It keeps saying:
Bringing machine 'default' up with 'virtualbox' provider...
==> default: VirtualBox VM is already running.
I had to do a "vagrant destroy" of my previously running VM, then a "vagrant up" of both.
ruby --version
ruby 2.0.0p353 (2013-11-22) [x64-mingw32]

vagrant --version
Vagrant 1.6.3
There you have a workaround.... however I am quite skeptical about a Vagrant that "fails" on such a basic operation... I am NOT impressed... I am unconditionally unimpressed by anything that revolves around Ruby; the language is way too unstable and buggy when you compare it to Java... I really wish all this Puppet/Vagrant world had not used Ruby as its implementation language...
Labels:
vagrant
Monday, August 11, 2014
Oracle DB check constraints based on regular expressions
It's good to know that for "check constraint" you can use any arbitrary regular expression http://docs.oracle.com/cd/E11882_01/server.112/e17118/conditions007.htm#SQLRF52150 .
Example:
create table PIPPO (name varchar2 (32));

alter table PIPPO add constraint check_name check (REGEXP_LIKE(name,'bi|bo|ba','c'));
If you try inserting some crap you get:
Error starting at line 5 in command:
INSERT INTO "CMDBUSER"."PIPPO" (NAME) VALUES ('bu')
Error report:
SQL Error: ORA-02290: check constraint (CMDBUSER.CHECK_NAME) violated
02290. 00000 -  "check constraint (%s.%s) violated"
*Cause:    The values being inserted do not satisfy the named check
*Action:   do not insert values that violate the constraint.
If you get something like this below it means that your regex is bad or that the last parameter of REGEXP_LIKE is not legal (e.g. if you enter it UPPER CASE instead of lower case):
INSERT INTO "CMDBUSER"."PIPPO" (NAME) VALUES ('bu') ORA-01760: illegal argument for function ORA-06512: at line 1
Labels:
oracledb
The DB is the Foundation
I am surprised at how little understood, even nowadays, is the importance of having clean, validated data in a DB.
DB schemas devoid of any NULL check, referential constraints and range checks are still commonplace, and management shuns attempts to enforce rigor perceiving it as "lack of flexibility". How flexible it is to have a "status" spelled in 5 different ways, or a date field represented as String and expressed in 10 different formats, or a boolean value expressed sometimes as True/False, other times as 0/1 and other times as YES/NO, I have no clue.
The DB is like the feet of your company, if feet are unstable, then your knees, your back, even your neck will start hurting and degenerating and making your life utterly miserable.
Statements like "we do all our validation in the front end" are ludicrous, ad history tell that Operations will make sure that a lot of DB changes will happen through custom scripts, independent UIs and even manually.
There are many analogies to DB constraints: control entropy (Physics), anti-cancerous cells (Biology), closed loop feedback (Control Systems), Secret Services (Fascist state - well all States are fascists inherently, so calling Fascist a State is a tautology)....
Saturday, August 9, 2014
Book: The Ethnic cleansing of Palestine
Professor Ilan Pappé does a wonderful job at explaining in the most minute (and horrific) details the events leading to the foundation of the State of Israel. It was simply a very well prepared, large scale military operation of Land Grabbing and Ethnic Cleansing quite similar in its dynamics to the one perpetrated by the Nazi Einsatzkommandos in Eastern Europe.
Zionist soldiers would reach villages mostly in the night, launch a few grenades, kill some people and expel the others, mostly without meeting any resistance, then loot their houses, grab their land and bulldoze mosques.
To his honor, Pappé never makes the all-too-easy comparison Zionism = Nazism, recognizing that the sole aim of the Zionists was to expel the Palestinians from their homeland, not to exterminate them. Of course they did some massive killing, but the targets were mostly adult males. Nor did they establish death factories like the Nazi KZs; normally the Palestinian males would be shot on the spot in front of their families, while the rest of the population was simply driven out in long foot marches to the neighbouring countries.
The book also dismantles the Zionist myths of an "Israel besieged by Arabs" and of a "second Holocaust", showing that the Zionists always had overwhelming military supremacy, and full support from the British. Jordan was always quite cooperative with them, and the other Arab armies were very limited in size, weaponry and coordination, and they intervened way too late, when 1 million Palestinians had already been robbed of their land.
What makes this book even more valuable is that it was written by a Jew whose family escaped Nazi persecution. This blunts the weapons of those ready to call Nazi and Anti-Semitic whoever criticizes Israel's policies. Go get it.
Thursday, August 7, 2014
HTTP GET in Java, performance issues
My first version was:
import java.net.*;
import java.io.*;

URL url = new URL(theURL);
BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
String wholeInput = "";
String inputLine = null;
while ((inputLine = in.readLine()) != null) {
    wholeInput += inputLine;
}
The above code proved to be EXTREMELY slow. The below version is MUCH faster:
URL url = new URL(theURL);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("GET");
BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
String wholeInput = "";
String inputLine = null;
// note: += on a String inside a loop is itself slow; a StringBuilder would be faster still
while ((inputLine = in.readLine()) != null) {
    wholeInput += inputLine;
}
in.close();
SOAP With Attachment with Java, working example
This code actually works and produces a SOAP message with an attachment of type application/octet-stream - which is totally acceptable, even if I would have preferred application/pdf.
import javax.xml.soap.*;
import java.net.URL;
import java.io.*;
import org.xml.sax.*;
import javax.xml.parsers.*;
import org.w3c.dom.*;
import javax.activation.*;

// now all the SOAP code
MessageFactory factory = MessageFactory.newInstance();
SOAPMessage message = factory.createMessage();
SOAPPart soapPart = message.getSOAPPart();
SOAPEnvelope envelope = soapPart.getEnvelope();
SOAPHeader header = envelope.getHeader();

DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
dbFactory.setNamespaceAware(true);
DocumentBuilder builder = dbFactory.newDocumentBuilder();

// add the SOAP body as from text
StringReader sr = new StringReader(textToSend);
InputSource is = new InputSource(sr);
Document document = builder.parse(is);
SOAPBody body = envelope.getBody();
SOAPBodyElement docElement = body.addDocument(document);

// add attachment from file
String filename = theBUSINESSIDVALUE.split("\\^")[0];
File fileAttachment = new File(rootFolderForMSS + filename + ".pdf");
FileDataSource fds = new FileDataSource(fileAttachment);
DataHandler dh = new DataHandler(fds);
out.write("content-type=" + dh.getContentType() + "\n");
AttachmentPart attachment = message.createAttachmentPart(dh);
String contentId = "<" + filename + ".pdf>";
attachment.setContentId(contentId);
message.addAttachmentPart(attachment);

SOAPConnectionFactory soapConnectionFactory = SOAPConnectionFactory.newInstance();
SOAPConnection connection = soapConnectionFactory.createConnection();
java.net.URL endpoint = new URL(POSTTOURL);
SOAPMessage soapResponse = connection.call(message, endpoint);
I had tried this one before:
AttachmentPart attachment = message.createAttachmentPart();
String filename = theBUSINESSIDVALUE.split("\\^")[0];
File fileAttachment = new File(rootFolderForMSS + filename + ".pdf");
byte[] fileData = new byte[(int) fileAttachment.length()];
DataInputStream dis = new DataInputStream(new FileInputStream(fileAttachment));
dis.readFully(fileData);
dis.close();
String contentId = "<" + filename + ".pdf>";
attachment.setContent(new ByteArrayInputStream(fileData), "application/pdf");
attachment.setContentId(contentId);
message.addAttachmentPart(attachment);
but it was failing with this error
javax.activation.UnsupportedDataTypeException: no object DCH for MIME type application/pdf
    at javax.activation.ObjectDataContentHandler.writeTo(DataHandler.java:877)
    at javax.activation.DataHandler.writeTo(DataHandler.java:302)
    at javax.mail.internet.MimeBodyPart.writeTo(MimeBodyPart.java:1403)
    at javax.mail.internet.MimeBodyPart.writeTo(MimeBodyPart.java:874)
    at javax.mail.internet.MimeMultipart.writeTo(MimeMultipart.java:444)
    at weblogic.xml.saaj.SOAPMessageImpl.writeMimeMessage(SOAPMessageImpl.java:959)
    at weblogic.xml.saaj.SOAPMessageImpl.writeTo(SOAPMessageImpl.java:832)
    at com.sun.xml.internal.messaging.saaj.client.p2p.HttpSOAPConnection.post(HttpSOAPConnection.java:237)
    at com.sun.xml.internal.messaging.saaj.client.p2p.HttpSOAPConnection.call(HttpSOAPConnection.java:144)
For more info see:
http://docs.oracle.com/javaee/5/tutorial/doc/bnbhr.html
http://docs.oracle.com/javase/6/docs/api/javax/activation/FileDataSource.html
http://docs.oracle.com/javase/6/docs/api/javax/activation/DataHandler.html
I would rather say that it could be all a LOT simpler.... Java Java...too verbose...
Wednesday, August 6, 2014
Java SOAP with Attachment in WebLogic
This is some code which more or less works:
import javax.xml.soap.*;
import java.net.URL;

String POSTTOURL = "http://yourhost:yourport/yourservice";

MessageFactory factory = MessageFactory.newInstance();
SOAPMessage message = factory.createMessage();
SOAPPart soapPart = message.getSOAPPart();
SOAPEnvelope envelope = soapPart.getEnvelope();
SOAPHeader header = envelope.getHeader();
SOAPBody body = envelope.getBody();

... set the body content here ...

SOAPConnectionFactory soapConnectionFactory = SOAPConnectionFactory.newInstance();
SOAPConnection connection = soapConnectionFactory.createConnection();
java.net.URL endpoint = new URL(POSTTOURL);
SOAPMessage soapResponse = connection.call(message, endpoint);

out.write("soapResponse=" + soapResponse.getSOAPBody().toString());
The above code without any "body content" will generate this:
<soapenv:Body xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"/>"
operation="null"
attachments="<con:attachments xmlns:con="http://www.bea.com/wli/sb/context"/>"
outbound="null"
fault="null"
inbound="<con:endpoint name="BLA" xmlns:con="http://www.bea.com/wli/sb/context">
  <con:service/>
  <con:transport/>
  <con:security/>
</con:endpoint>"
header="<soapenv:Header xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"/>
I got this error message:
This class does not support SAAJ 1.1
Adding this in the setDomainEnv.sh for JAVA_OPTIONS seems to fix it:
-Djavax.xml.soap.MessageFactory=weblogic.xml.saaj.MessageFactoryImpl
See also document "Receive the Error "This Class Does Not Support SAAJ 1.1" when Running a P6 Web Services Application (Doc ID 1297252.1)" where they recommend:
JAVA_OPTIONS="${SAVE_JAVA_OPTIONS} -Djavax.xml.soap.MessageFactory=com.sun.xml.messaging.saaj.soap.ver1_1.SOAPMessageFactory1_1Impl -Djavax.xml.soap.SOAPConnectionFactory=weblogic.wsee.saaj.SOAPConnectionFactoryImpl"