The env var $OPENSHIFT_MONGODB_DB_LOG_DIR points to the MongoDB log location.
OutOfMemoryError initializing HornetQ on OpenShift WildFly 8.2
Saw this error starting up my JMS queue on WildFly 8.2 running on OpenShift:
2015-01-02 21:03:46,417 WARN [org.hornetq.ra] (default-threads - 1) HQ152005: Failure in HornetQ activation org.hornetq.ra.inflow.HornetQActivationSpec(ra=org.hornetq.ra.HornetQResourceAdapter@b1dce2 destination=jms/queue/spot destinationType=javax.jms.Queue ack=Auto-acknowledge durable=false clientID=null user=null maxSession=15): java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method) [rt.jar:1.8.0_05]
Based on this thread, a suggestion was to reduce the JMS thread pool. My current config (possibly carried over from my prior deployment on WildFly 8.1):
<subsystem xmlns="urn:jboss:domain:messaging:2.0">
    <hornetq-server>
        <journal-file-size>102400</journal-file-size>
        <thread-pool-max-size>${messaging.thread.pool.max.size}</thread-pool-max-size>
        <scheduled-thread-pool-max-size>${messaging.scheduled.thread.pool.max.size}</scheduled-thread-pool-max-size>
So based on the recommendation in the linked post above, I set thread-pool-max-size and scheduled-thread-pool-max-size to 20 and this fixed my OutOfMemory issue.
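For reference, the relevant part of the subsystem config after the change looks like this (values shown inline here rather than via the system properties I was using; the rest of the hornetq-server config is unchanged):

```xml
<subsystem xmlns="urn:jboss:domain:messaging:2.0">
    <hornetq-server>
        <journal-file-size>102400</journal-file-size>
        <!-- reduced from the defaults to avoid exhausting native threads
             in the memory-constrained OpenShift gear -->
        <thread-pool-max-size>20</thread-pool-max-size>
        <scheduled-thread-pool-max-size>20</scheduled-thread-pool-max-size>
    </hornetq-server>
</subsystem>
```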
JAX-WS endpoint deployment issue on OpenShift WildFly 8.2
When deploying an app with a JAX-WS endpoint to WildFly on OpenShift, hitting the ?wsdl URL to check the generated WSDL gives this error:
22:49:49,502 ERROR [io.undertow.request] (default task-3) UT005023: Exception handling request to /ExampleEndpoint: javax.xml.ws.WebServiceException: JBWS024032: Cannot obtain endpoint jboss.ws:context=,endpoint=example.endpoint.ExampleEndpoint at org.jboss.wsf.stack.cxf.transport.ServletHelper.initServiceEndpoint(ServletHelper.java:82)
From some Googling, it appears this issue is related to the fact that on OpenShift your app is deployed as ROOT.war. You need to add a jboss-web.xml declaring that your app is deployed at the root context (and not at /ROOT/), so your wsdl can be found at the expected url.
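For reference, a minimal WEB-INF/jboss-web.xml that declares the root context looks like this (this is the standard WildFly deployment descriptor; adjust the context-root if your app isn't deployed as ROOT.war):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
    <!-- app is served from the root context on OpenShift -->
    <context-root>/</context-root>
</jboss-web>
```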
Adding dependent jars to your OpenShift Maven repo – take 2
Looking back at my last post on this, it's been a year since I last deployed something to OpenShift with my own custom dependent jars 🙂 Some of the paths appear to have changed since then, but the approach is still the same.
What worked for me this time:
mvn install:install-file -DgeneratePom=true -Dfile=../../jar-file-name-inc-version-number.jar -DgroupId=your-group-id -DartifactId=artifact-name -Dversion=0.0.1-SNAPSHOT -Dpackaging=jar
Looks like the relative path to where the jar gets copied to in your remote account changed slightly since last time.
The odd thing is I still ran into issues with my prebuild script file losing its executable flag, and it never seems to run as part of my build, but I can run it manually by ssh'ing into my account and running it by hand.
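One possible cause of the lost executable flag is that git isn't tracking the bit (common when committing from Windows). A sketch of a fix, assuming the hook lives at the usual .openshift/action_hooks/pre_build path:

```shell
# Record the executable bit for the hook in git's index,
# so it survives the push to the OpenShift gear
git update-index --chmod=+x .openshift/action_hooks/pre_build
```

Then commit and push as usual; the file should arrive on the gear with the flag set.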