Glassfish Cluster SSH - Tutorial : How to create and configure a glassfish cluster with SSH (Part 2)

3. Glassfish cluster configuration

With our prerequisites covered, it's time to start configuring our cluster.

A. Enabling remote access on the glassfish servers

By default the remote administration feature is disabled on the GlassFish Server. This is to reduce your exposure to an attack from elsewhere in the network.

So if you attempt to administer the server remotely you will get a 403 - Forbidden HTTP status as the response. This is true regardless of whether you use the asadmin command, an IDE, or the REST interface.

You will need to start up the glassfish server on each one of your servers by executing the following command:

$GFISH_HOME/bin/asadmin start-domain domain1

Once the servers are up you can turn remote administration on by running the following command locally on each one of the servers (please ensure the glassfish instance is running when you execute the command); when prompted, enter the appropriate user/password:

$GFISH_HOME/bin/asadmin enable-secure-admin

This command actually accomplishes two things: it enables remote administration and encrypts all admin traffic.

In order for your modifications to be taken into account you will need to restart the glassfish server by running the following commands:

$GFISH_HOME/bin/asadmin stop-domain domain1

Once the server is stopped you need to start it again so it can pick up the remote admin parameter:
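
$GFISH_HOME/bin/asadmin start-domain domain1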

You can now stop the running servers on all the nodes (except server1) by executing the above stop-domain command.

Note: From this point forward all the operations should be executed on the server hosting the DAS (Domain Admin Server), in our case server1, so make sure it's running.


Tip: To avoid having to type your glassfish user/password when you run each command, you can check my other post regarding the asadmin login command here; it will prompt you once for your credentials and store them in an encrypted file under /home/gfish/.asadminpass.
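
For reference, the command itself looks like this; asadmin will prompt for the admin user and password and store them for subsequent invocations:

$GFISH_HOME/bin/asadmin login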


B. Creating the cluster nodes

A node represents a host on which the GlassFish Server software is installed.

A node must exist for every host on which GlassFish Server instances reside. A node's configuration contains information about the host such as the name of the host and the location where the GlassFish Server is installed on the host. For more info regarding glassfish nodes click here

The node for server1 is automatically created when you create your domain so we will create a node for each of the remaining servers (server2 & server3).

Creating node2 on server2

[server1]$ $GFISH_HOME/bin/asadmin create-node-ssh --nodehost=server2 node2
Enter admin user name>  admin
Enter admin password for user "admin">
Command create-node-ssh executed successfully.

Creating node3 on server3

[server1]$ $GFISH_HOME/bin/asadmin create-node-ssh --nodehost=server3 node3
Enter admin user name>  admin
Enter admin password for user "admin">
Command create-node-ssh executed successfully.

Note: If you followed my tip regarding the credentials storage, your output might be slightly different (no user/password prompt).
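
To double-check that both nodes were registered with the DAS, you can list them (the exact output will depend on your setup):

[server1]$ $GFISH_HOME/bin/asadmin list-nodes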

C. Creating the cluster

Once the nodes are created it's time for us to create our cluster:


[server1]$ $GFISH_HOME/bin/asadmin create-cluster myCluster
Enter admin user name>  admin
Enter admin password for user "admin">
Command create-cluster executed successfully.

D. Creating the cluster instances

Now that we have the nodes and the cluster configured, we need to create the instances that will be part of the cluster.

a. Creating the local instance (instance that will be running on the same server as the DAS)

[server1]$ $GFISH_HOME/bin/asadmin create-local-instance --cluster myCluster --node localhost i_1
Command _create-instance-filesystem executed successfully.
Port Assignments for server instance i_1:
JMX_SYSTEM_CONNECTOR_PORT=28686
JMS_PROVIDER_PORT=27676
HTTP_LISTENER_PORT=28081
ASADMIN_LISTENER_PORT=24848
JAVA_DEBUGGER_PORT=29009
IIOP_SSL_LISTENER_PORT=23820
IIOP_LISTENER_PORT=23700
OSGI_SHELL_TELNET_PORT=26666
HTTP_SSL_LISTENER_PORT=28181
IIOP_SSL_MUTUALAUTH_PORT=23920
The instance, i_1, was created on host localhost
Command create-instance executed successfully.

b. Creating the remote instances

node2 --> i_2

[server1]$ $GFISH_HOME/bin/asadmin create-instance --cluster myCluster --node node2 i_2

node3 --> i_3

[server1]$ $GFISH_HOME/bin/asadmin create-instance --cluster myCluster --node node3 i_3
The message output should be similar to the one shown in the first code snippet (omitted for readability).

E. Working with the cluster

This section shows a few useful commands for managing clusters.

a. Showing cluster status
[server1]$ $GFISH_HOME/bin/asadmin list-clusters
myCluster running
b. Instance status

[server1]$ $GFISH_HOME/bin/asadmin list-instances
i_1   running
i_2   running
i_3   running
Command list-instances executed successfully.
c. Starting the cluster
$GFISH_HOME/bin/asadmin start-cluster myCluster
d. Stopping the cluster
$GFISH_HOME/bin/asadmin stop-cluster myCluster
e. Starting an instance
$GFISH_HOME/bin/asadmin start-instance i_1
f. Stopping an instance
$GFISH_HOME/bin/asadmin stop-instance i_1
g. Deploying a war to the cluster
$GFISH_HOME/bin/asadmin deploy --enabled=true --name=myApp:1.0 --target myCluster myApp.war
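
To confirm the deployment, you can list the applications deployed to the cluster target:

$GFISH_HOME/bin/asadmin list-applications myCluster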

4. Configuring glassfish as a daemon/service

In a production environment you will probably want your cluster to start up automatically (as a daemon/service) when your operating system boots.

In this section I will show you how to create 2 possible scripts for starting/stopping a glassfish server under a CentOS Linux distribution (please note that depending on your Linux distribution the details might differ).

In our cluster configuration there are mainly 2 types of instances :

  • DAS (Domain Admin Server) instance: this is our main instance; it handles the cluster configuration and propagates it to the cluster instances
  • Clustered instance: a clustered instance inherits its configuration from the cluster to which it belongs and shares its configuration with the other instances in the cluster

Please note that due to this difference the startup scripts for each instance type will be slightly different.

Whether we are configuring a clustered instance or a DAS instance, the script will live under:

/etc/init.d/glassfish
A. Creating the service script for the DAS (server1)

The following is an example start-up script for managing a glassfish DAS instance. It assumes the glassfish password is already stored in the .asadminpass file as shown before.

#!/bin/bash
#
# chkconfig: 3 80 05
# description: Startup script for Glassfish
GLASSFISH_HOME=/opt/glassfish/bin
GLASSFISH_OWNER=gfish
GLASSFISH_DOMAIN=domain1
CLUSTER_NAME=myCluster
export GLASSFISH_HOME GLASSFISH_OWNER GLASSFISH_DOMAIN CLUSTER_NAME
start() {
        echo -n "Starting Glassfish: "
        su $GLASSFISH_OWNER -c "$GLASSFISH_HOME/asadmin start-domain $GLASSFISH_DOMAIN"
        su $GLASSFISH_OWNER -c "$GLASSFISH_HOME/asadmin start-cluster $CLUSTER_NAME"
        echo "done"
}
stop() {
        echo -n "Stopping Glassfish: "
        su $GLASSFISH_OWNER -c "$GLASSFISH_HOME/asadmin stop-cluster $CLUSTER_NAME"
        su $GLASSFISH_OWNER -c "$GLASSFISH_HOME/asadmin stop-domain $GLASSFISH_DOMAIN"
        echo "done"
}
stop_cluster(){
     echo -n "Stopping glassfish cluster $CLUSTER_NAME"
     su $GLASSFISH_OWNER -c "$GLASSFISH_HOME/asadmin stop-cluster $CLUSTER_NAME"
     echo "glassfish cluster stopped"

}

start_cluster(){
     echo -n "Starting glassfish cluster $CLUSTER_NAME"
     su $GLASSFISH_OWNER -c "$GLASSFISH_HOME/asadmin start-cluster $CLUSTER_NAME"
     echo "Glassfish cluster started"
}
case "$1" in
        start)
                start
                ;;
        stop)
                stop
                ;;
        stop-cluster)
                stop_cluster
                ;;
        start-cluster)
                start_cluster
                ;;
        restart)
                stop
                start
                ;;
        *)
                echo $"Usage: Glassfish {start|stop|restart|start-cluster|stop-cluster}"
                exit
esac
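
Once the script is in place, make it executable and register it as a service so it runs at boot; on CentOS this is typically done with chkconfig (adapt this to your distribution):

chmod +x /etc/init.d/glassfish
chkconfig --add glassfish
chkconfig glassfish on
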
B. Creating the service script for the cluster instances (server2, server3)

Below is an example script for configuring a glassfish cluster instance.

#!/bin/bash
#
# chkconfig: 3 80 05
# description: Startup script for Glassfish
GLASSFISH_HOME=/opt/glassfish/bin
GLASSFISH_OWNER=gfish
# admin user and password file passed to asadmin below -- adjust both to your setup;
# the password file is expected to contain a line of the form AS_ADMIN_PASSWORD=<password>
GLASSFISH_ADMIN=admin
GLASSFISH_PASSWORD=/home/gfish/glassfish-password.txt
NODE=node2
INSTANCE=i_2

export GLASSFISH_HOME GLASSFISH_OWNER GLASSFISH_ADMIN GLASSFISH_PASSWORD NODE INSTANCE
start() {
        echo -n "Starting Glassfish: "
        su $GLASSFISH_OWNER -c "$GLASSFISH_HOME/asadmin --user $GLASSFISH_ADMIN --passwordfile $GLASSFISH_PASSWORD start-local-instance --node $NODE --sync normal $INSTANCE"
        echo "done"
}
stop() {
        echo -n "Stopping Glassfish: "
        su $GLASSFISH_OWNER -c "$GLASSFISH_HOME/asadmin stop-local-instance --node $NODE $INSTANCE"
        echo "done"
}
case "$1" in
        start)
                start
                ;;
        stop)
                stop
                ;;
        restart)
                stop
                start
                ;;
        *)
                echo $"Usage: Glassfish {start|stop|restart}"
                exit
esac

Warning: Since clustered instances cannot exist outside a cluster, it's important that the local glassfish server is not started using the asadmin start-domain command. If you do so you will probably lose all your cluster configuration for that instance.

You should always start the clustered instances with the command shown above.
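
That is, on server2 for example (this mirrors what the service script runs):

$GFISH_HOME/bin/asadmin start-local-instance --node node2 --sync normal i_2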

5. Troubleshooting & additional resources

I'm getting the message 'remote failure ...'


remote failure: Warning: some parameters appear to be invalid.
SSH node not created. To force creation of the node with these parameters rerun the command using the --force option.
Could not connect to host arte-epg-api1.sdv.fr using SSH.
The subsystem request failed.
The server denied the request.
Command create-node-ssh failed.

This is probably because the SFTP subsystem is not enabled on the target node. Please ensure that the following line is uncommented in the sshd_config file:

Subsystem       sftp    /usr/libexec/openssh/sftp-server

Where can I find the glassfish startup/shutdown scripts ?

You can find these scripts over at my github account

Glassfish Cluster SSH - Tutorial : How to create and configure a glassfish cluster with SSH (Part 1)

0. Presentation

A cluster is a named collection of GlassFish Server instances that share the same applications, resources, and configuration information.

GlassFish Server enables you to administer all the instances in a cluster as a single unit from a single host, regardless of whether the instances reside on the same host or different hosts. You can perform the same operations on a cluster that you can perform on an unclustered instance, for example, deploying applications and creating resources.

In this tutorial we will configure a 3-node Glassfish cluster over SSH, using only the command-line utility asadmin.

Below is a global picture of the target cluster structure :

The following are the conventions/assumptions that will be used throughout this tutorial:

  • linux user : gfish
  • glassfish domain : domain1
  • glassfish cluster name : myCluster

1. Prerequisites

A. SSH key configuration (Optional but used in this tutorial)

In order for the different instances of glassfish to communicate easily with each other, it's a good idea to generate ssh keys for each of the users running the glassfish process (ideally the same user/password combination on each machine).

We will be generating the keys for the same user that runs the glassfish process. For this tutorial it will be assumed the user is gfish.

a. Generating the ssh keys

On each of the machines (server1, server2, server3) execute the following command, and leave the default options:

[server] ssh-keygen -t rsa

Your output should be similar to the one below :

Generating public/private rsa key pair.
Enter file in which to save the key (/home/gfish/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/gfish/.ssh/id_rsa.
Your public key has been saved in /home/gfish/.ssh/id_rsa.pub.
The key fingerprint is:
00:6a:ef:30:65:45:7a:15:1f:f0:65:6d:81:03:cd:1d gfish@server1
The key's randomart image is:
+--[ RSA 2048]----+
|    ..o +oo+o+Eo |
|   . + . o += +  |
|  o + o   o  o   |
| . + . .         |
|  o .   S        |
|   +             |
|    .            |
|                 |
|                 |
+-----------------+
b. Copying the ssh keys to the servers

Once your key is generated you will need to copy each one of the generated public keys between the different servers, allowing you to connect without entering a password.

For that you will need to execute the following command on each one of the servers, slightly modifying it to reflect the target server:


[server1] cat /home/gfish/.ssh/id_rsa.pub | ssh gfish@server2 'cat >> .ssh/authorized_keys'

In this example we are copying the ssh key belonging to server1 to server2's authorized_keys file; this will allow the user gfish to connect from server1 to server2 without using a password.

As stated before, this operation needs to be repeated for each one of the servers by adapting the syntax to your case.
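
Alternatively, on distributions that ship the ssh-copy-id helper, the same result can be achieved with:

[server1] ssh-copy-id gfish@server2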

In the end your key distribution should be similar to:

  • server1 key → server2, server3
  • server2 key → server1, server3 (optional)
  • server3 key → server1, server2 (optional)
c. Authorized_keys2 (optional)

Depending on the ssh version you are using, you may need to make a copy of the authorized_keys file and name it authorized_keys2.

This can be done by executing the following command (again, on each one of your servers):

cp authorized_keys authorized_keys2

Once the previous steps are completed you should be able to log in from one server to another through SSH without the need for a password.

Execute the following command to confirm it:

[server1] ssh gfish@server2

If you are prompted to enter a password when executing the previous command then something went wrong so you should recheck your config.

B. SSH Subsystem

When deploying applications on a cluster, they are deployed to the cluster nodes using SFTP. You will need to ensure that the SFTP SSH subsystem is enabled on all of the machines that will be hosting the glassfish instances.

To do this please ensure that your sshd_config file (probably under /etc/ssh/sshd_config) has the following line and that it's not commented :

Subsystem       sftp    /usr/libexec/openssh/sftp-server
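
After editing sshd_config, restart the SSH daemon so the change is picked up; on CentOS, for example:

service sshd restart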

Otherwise you may run into this kind of error message when creating the ssh nodes (see troubleshooting).

Fix Did not receive a signal within 15.000000 seconds. Exiting... error message

I have installed OSXFuse with the NTFS-3G add-on on my MacBook Pro, and up until the Mac OS Lion update I had no particular issues with it.

With the Lion update I started getting the following error when mounting NTFS drives :

Did not receive a signal within 15.000000 seconds. Exiting...

This is a known issue, and the NTFS drive was properly mounted and working despite the error message, but I found the message pretty annoying... luckily there's a solution to it.

Head over to bfleischer's fuse-wait repo on GitHub and follow the instructions to either install the patch manually (you'll need Xcode to build the source file) or via the script that he has provided.

ViewResolver vs MessageConverter Spring MVC when to use

Spring has lots of ways of handling view and content resolution, which is probably a good thing since it gives you flexibility, but sometimes it can be a bit problematic.

Full disclosure: I must admit I had never given much attention to all of the ways views could be resolved; usually I went with a ViewResolver for handling my views and that was it... until recently, that is.

For one of my projects I needed some custom JSON handling, so I had defined a custom view by extending spring's org.springframework.web.servlet.view.json.MappingJackson2JsonView and registered it as the default implementation for handling JSON views, roughly like the following (a minimal sketch, with com.example.CustomMappingJackson2JsonView standing in for my actual subclass):

<bean class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver">
    <property name="defaultViews">
        <list>
            <!-- custom MappingJackson2JsonView subclass used as the default view for JSON -->
            <bean class="com.example.CustomMappingJackson2JsonView"/>
        </list>
    </property>
</bean>

Everything was working as expected until I had to annotate some methods with @ResponseBody: suddenly every method with this annotation was no longer being processed by my custom view, and I couldn't understand why.

As it turns out, it's pretty simple (once you read the documentation again): when you annotate a method with @ResponseBody, Spring delegates the rendering of the response to an HttpMessageConverter and not a ViewResolver.

So to sum up :

Handler                 When to use
ViewResolver            When you are trying to resolve a view, or the content you are serving lives elsewhere (e.g. a controller method that returns the string "index", which gets mapped to the index.xhtml page)
HttpMessageConverter    When you use the @ResponseBody annotation to return the response directly from the method, without mapping to an external file

MongoDB spring data $elemMatch in field projection

Sometimes when querying a MongoDB document, what you actually need is a single item from one of the document's embedded collections.

The issue is that Mongo will filter only the first-level documents that correspond to your query; it will not filter out embedded documents that do not match your query criteria. So if you are trying to find a specific book within a book saga, you will get the complete 'parent' document with the entire sub-collection you were trying to filter.

So what can you do when you wish to filter a sub-collection to extract a single subdocument?

Enter the $elemMatch operator. Introduced in the 2.1 version, the $elemMatch projection operator limits the contents of an array field included in the query results to contain only the array element that matches the predicate expressed by the operator.

Let's say I have a model like the following :

   {
     "id": "1",
     "series": "A song of ice and Fire",
     "author" : "George R.R. Martin",
     "books" : [
           {
               "title" : "A Game of Thrones",
               "pubYear" : 1996,
               "seriesNumber" : 1
           },
           {
               "title" : "A Clash of Kings",
               "pubYear" : 1998,
               "seriesNumber" : 2
           },
           {
               "title" : "A Storm of Swords",
               "pubYear" : 2000,
               "seriesNumber" : 3
           },
           {
               "title" : "A Feast for Crows",
               "pubYear" : 2005,
               "seriesNumber" : 4
           },
           {
               "title" : "A Dance with Dragons",
               "pubYear" : 2011,
               "seriesNumber" : 5
           },           
           {
               "title" : "The Winds of Winter",
               "pubYear" : 2014,
               "seriesNumber" : 6
           },           
           {
               "title" : "A Dream of Spring",
               "pubYear" : 2100,
               "seriesNumber" : 7
           }

      ]
}

Now let's say for example that I'm interested only in a single book of a given series, say the 4th book of A Song of Ice and Fire.

This can be accomplished in a number of different ways :

  • MapReduce functions : supported by Spring Data, but maybe a bit cumbersome for what we are trying to accomplish (I will write a future tutorial on how to use MongoDB MapReduce functions with Spring Data and on making those functions "templatable")
  • Mongo's aggregation framework : version 2.1 introduced the aggregation framework that lets you do a lot of stuff (more info here), however the aggregation framework is not currently supported by Spring Data
  • The $elemMatch operator

But what is the $elemMatch operator, and how do you use it?

Well, you could say $elemMatch is a multi-purpose operator, as it can be used both as part of the query object and as part of the fields object, where it acts as a kind of simplified MapReduce :)

As always, though, there is a caveat when using the $elemMatch operator: if more than one embedded document matches your criteria, only the first one will be returned, as stated in the MongoDB documentation.

Now, when using Spring Data with MongoDB, you can do field projection relatively easily using the fields() object exposed by the Query class, like so :


  Query query = Query.query(Criteria.where("series").is("A song of ice and Fire"));
  query.fields().include("books");

However the Query class, even though it is very easy to use and manipulate, doesn't give you access to some of Mongo's more powerful mechanisms, like for example $elemMatch as a projection operator.

In order for you to use the $elemMatch operator as a projection constraint you need to use a subclass of org.springframework.data.mongodb.core.query.Query i.e. org.springframework.data.mongodb.core.query.BasicQuery

So let's get down to business with an example.

Let's say we are interested only in the 4th book of George R.R. Martin's saga A Song of Ice and Fire.

Now if you were using the traditional Query class, you would probably end up writing two-step logic that looks something like this (unless you were using a MapReduce function):

1. Retrieve the parent Book document that corresponds to your search criteria


/**
* Returns the parent book corresponding to the sagaName criteria with the unfiltered child collection books
*/
public Book findBookNumberInSaga(String sagaName, Integer bookNumber){

   Query query = Query.query(Criteria.where("series").is(sagaName).and("books.seriesNumber").is(bookNumber));

   MongoTemplate template = getMongoTemplate();

   // findOne returns a single Book; find(query, Book.class) would return a List<Book>
   return template.findOne(query, Book.class);

}


2. From the parent document, iterate through the books collection (or use LambdaJ :p ) to recover the book you are really interested in


public class WithoutElememMatch{

    public static void main(String[] args){

        Book saga = findBookNumberInSaga("A song of ice and Fire", 4);
        Book numberFor = null;

        // iterate over the unfiltered books collection to find the one we want
        Iterator<Book> books = saga.getBooks().iterator();

        while (books.hasNext()){
            Book currentBook = books.next();

            if(currentBook.getSeriesNumber() == 4){
                numberFor = currentBook;
            }
        }

    }
    //...
}



Even though the previous example is certainly not the best way to implement this logic, it works fine and serves its purpose.

Now let's implement the same thing but this time we will make the database work for us :

1. Fetch the book corresponding to the requested criteria, making the database do the work for you

Here we ask MongoDB to filter the elements in the sub-collection books to the one matching our criteria (i.e. the number 4 in the series)


/**
* Returns the parent book corresponding to the sagaName criteria, with a size-1 'books' collection whose seriesNumber
* property corresponds to the value of the bookNumber argument
*/
public Book findBookNumberInSaga(String sagaName, Integer bookNumber){

        // the query object
        Criteria findSeriesCriteria = Criteria.where("series").is(sagaName);
        // the field (projection) object
        Criteria findSagaNumberCriteria = Criteria.where("books").elemMatch(Criteria.where("seriesNumber").is(bookNumber));
        BasicQuery query = new BasicQuery(findSeriesCriteria.getCriteriaObject(), findSagaNumberCriteria.getCriteriaObject());

        // findOne returns the single matching Book with the projected books collection
        return mongoOperations.findOne(query, Book.class);

}

 // omitted mongo template initialization




As you can see, this time I didn't use the org.springframework.data.mongodb.core.query.Query class to build my query but instead the org.springframework.data.mongodb.core.query.BasicQuery class, because, as I stated before, you can only do projection in the field object by using this class.

You will notice that the syntax for this class is a bit different, as it takes 2 DBObjects (which are basically HashMaps): one for the query object and one for the field object, much like the mongo shell client syntax :

db.collection.find( <query>, <projection> )

Now that our method is implemented, we can finally call it and check the results :


public class WithElememMatch{

    public static void main(String[] args){

        // get the parent book
        Book parentBook = findBookNumberInSaga("A song of ice and Fire", 4);
        Book numberFor = null;

        // null checks
        if(parentBook != null && CollectionUtils.isNotEmpty(parentBook.getBooks())){
            // get the only book we are interested in: the projection left a single element
            numberFor = parentBook.getBooks().get(0);
        }

    }
    //...
}

So there you go, hope this was useful to you. As usual, you can find the code for this tutorial over at my github account here.

Note: for this tutorial you will only find the 2 java classes mentioned earlier; there's no Spring / Spring Data configuration (that's for another tutorial).

OSX show used ports or listening applications with their PID

On OSX you can display applications listening on a given port using lsof; the commands described below will show listening application...