OutOfMemoryError initializing HornetQ on OpenShift WildFly 8.2

Saw this error starting up my JMS queue on WildFly 8.2 running on OpenShift:

2015-01-02 21:03:46,417 WARN  [org.hornetq.ra] (default-threads - 1) HQ152005: Failure in HornetQ activation org.hornetq.ra.inflow.HornetQActivationSpec(ra=org.hornetq.ra.HornetQResourceAdapter@b1dce2 destination=jms/queue/spot destinationType=javax.jms.Queue ack=Auto-acknowledge durable=false clientID=null user=null maxSession=15): java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method) [rt.jar:1.8.0_05]

Based on this thread, one suggestion was to reduce the JMS thread pool size. Here's my current config (possibly carried over from my prior deployment on WildFly 8.1):

<subsystem xmlns="urn:jboss:domain:messaging:2.0">
    <hornetq-server>
        <journal-file-size>102400</journal-file-size>
        <thread-pool-max-size>${messaging.thread.pool.max.size}</thread-pool-max-size>
        <scheduled-thread-pool-max-size>${messaging.scheduled.thread.pool.max.size}</scheduled-thread-pool-max-size>

So, based on the recommendation in the linked post above, I set thread-pool-max-size and scheduled-thread-pool-max-size to 20, and this fixed my OutOfMemoryError.
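For reference, the updated fragment of the messaging subsystem config ends up looking something like this (values hardcoded here for illustration; setting the two system properties to 20 instead would have the same effect):

<subsystem xmlns="urn:jboss:domain:messaging:2.0">
    <hornetq-server>
        <journal-file-size>102400</journal-file-size>
        <thread-pool-max-size>20</thread-pool-max-size>
        <scheduled-thread-pool-max-size>20</scheduled-thread-pool-max-size>
        ...
    </hornetq-server>
</subsystem>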

JAX-WS endpoint deployment issue on OpenShift WildFly 8.2

After deploying an app with a JAX-WS endpoint to WildFly on OpenShift, hitting the ?wsdl URL to check the generated WSDL gives this error:

22:49:49,502 ERROR [io.undertow.request] (default task-3) UT005023: Exception handling request to /ExampleEndpoint: javax.xml.ws.WebServiceException: JBWS024032: Cannot obtain endpoint jboss.ws:context=,endpoint=example.endpoint.ExampleEndpoint at org.jboss.wsf.stack.cxf.transport.ServletHelper.initServiceEndpoint(ServletHelper.java:82)

From some Googling, it appears this issue is related to the fact that on OpenShift your app is deployed at the root context. You need to add a jboss-web.xml defining that your app is deployed at the root context (and not at /ROOT/, since the deployed war is ROOT.war) so your WSDL can be found at the expected URL.
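A minimal jboss-web.xml declaring the root context, dropped into the app's WEB-INF directory, looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
    <context-root>/</context-root>
</jboss-web>

With that in place, the WSDL should be reachable at the expected URL (in this case something like http://your-app-url/ExampleEndpoint?wsdl).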

Updating the endpoint URL for a generated JAX-WS client

When you generate a JAX-WS client from a locally deployed service, the URLs for the endpoint and the WSDL are hardcoded into the generated client code. If you redeploy the service elsewhere (e.g. moving from a local development environment to a test or production environment), you could regenerate the client against the new URL for the service, but the JAX-WS API also allows you to programmatically specify the endpoint and WSDL locations.

From this post, the key is to use the BindingProvider to specify a new value for BindingProvider.ENDPOINT_ADDRESS_PROPERTY:

// cast the generated port to BindingProvider and override the endpoint address
BindingProvider bp = (BindingProvider) port;
bp.getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, "http://your_url/ws/sample");

By itself though, this gave me an additional exception, as it appears the client is still attempting to locate the WSDL. To work around this, you can also pass the new WSDL URL when you instantiate the generated client (this is discussed here). Before using the BindingProvider snippet above, pass the updated URLs into the client like this:

EndpointService service = new EndpointService(
    new URL("http://your-url-to-endpoint/Endpoint?wsdl"),
    new QName("http://namespace-of-endpoint-from-wsdl/", "EndpointService"));

SpotCollectorEndpoint endpoint = service.getSpotCollectorEndpointPort();
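Putting both pieces together, a full client setup ends up looking something like the sketch below. EndpointService and SpotCollectorEndpoint are the generated stubs from the snippets above; the factory class, method name, and URL paths are placeholders, so adjust them for your own service:

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.BindingProvider;

public class SpotClientFactory {

    public static SpotCollectorEndpoint createClient(String baseUrl) throws Exception {
        // point the generated service at the WSDL for the target environment
        EndpointService service = new EndpointService(
                new URL(baseUrl + "/Endpoint?wsdl"),
                new QName("http://namespace-of-endpoint-from-wsdl/", "EndpointService"));

        SpotCollectorEndpoint endpoint = service.getSpotCollectorEndpointPort();

        // then override the address the port will actually send requests to
        BindingProvider bp = (BindingProvider) endpoint;
        bp.getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY,
                baseUrl + "/Endpoint");

        return endpoint;
    }
}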


Adding dependent jars to your OpenShift Maven repo – take 2

Looking back at my last post on this, it's been a year since I deployed something to OpenShift with my own custom dependent jars 🙂 It looks like some of the paths have changed since then, but the approach is still the same.

What worked for me this time:

mvn install:install-file -DgeneratePom=true \
  -Dfile=../../jar-file-name-inc-version-number.jar \
  -DgroupId=your-group-id -DartifactId=artifact-name \
  -Dversion=0.0.1-SNAPSHOT -Dpackaging=jar
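Once installed into the repo, the jar can be referenced from your pom.xml like any other dependency, using the same coordinates you passed to install-file:

<dependency>
    <groupId>your-group-id</groupId>
    <artifactId>artifact-name</artifactId>
    <version>0.0.1-SNAPSHOT</version>
</dependency>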

It looks like the relative path to where the jar gets copied in your remote account has changed slightly since last time.

The odd thing is that I still ran into issues with my prebuild script losing its executable flag, and it never seems to run as part of my build, but I can run it by hand after ssh'ing into my account.
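I haven't gotten to the bottom of this, but if the executable bit is being dropped by git, something along these lines should keep it set (this assumes the hook lives at .openshift/action_hooks/pre_build; adjust the path to wherever your prebuild script actually is):

# mark the script executable and record the mode bit in the git index
chmod +x .openshift/action_hooks/pre_build
git update-index --chmod=+x .openshift/action_hooks/pre_build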