Showing posts with label Apache. Show all posts

Tuesday, September 23, 2014

JMeter Plugins for Apache JMeter

I came across the JMeter Plugins project today and tried it out. It offers a nice set of add-ons that support and complement JMeter's existing functionality.

Monday, March 10, 2014

Fixing BSFException: unable to load language: java

I was using JMeter to execute BeanShell scripts that I had written and came across this exception. I wasted some time finding out what the exact issue was. Although I had copied the BSF jar into the JMeter lib directory, it was not sufficient. When I added bsh-bsf-2.0b4.jar as well, the script started running successfully.

I thought someone else might find this tip useful, so I'm blogging it.
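A quick way to check whether the right engine jar is actually in place is to look for it in JMeter's lib directory. This is a sketch: the JMeter path below is an assumption, so point JMETER_HOME at your own install.

```shell
# Hypothetical install path; adjust JMETER_HOME to your own JMeter directory.
JMETER_HOME=${JMETER_HOME:-$HOME/apache-jmeter-2.11}

# The BeanShell engine for BSF lives in bsh-bsf-*.jar; bsf-*.jar alone is not enough.
if ls "$JMETER_HOME/lib"/bsh-bsf-*.jar >/dev/null 2>&1; then
  status="present"
else
  status="missing"
fi
echo "bsh-bsf engine jar: $status"
```

If it prints "missing", drop bsh-bsf-2.0b4.jar into $JMETER_HOME/lib and restart JMeter.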

Thursday, March 6, 2014

Fixing java.lang.ClassNotFoundException: org.apache.bsf.engines.java.JavaEngine

When using JMeter's Java request sampler, I started seeing the error below.
2014/03/06 09:46:39 ERROR - org.apache.bsf.BSFManager: Exception : java.lang.ClassNotFoundException: org.apache.bsf.engines.java.JavaEngine
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 at org.apache.bsf.BSFManager.loadScriptingEngine(BSFManager.java:693)
 at org.apache.jmeter.util.BSFTestElement.processFileOrScript(BSFTestElement.java:98)
 at org.apache.jmeter.visualizers.BSFListener.sampleOccurred(BSFListener.java:51)
 at org.apache.jmeter.threads.ListenerNotifier.notifyListeners(ListenerNotifier.java:84)
 at org.apache.jmeter.threads.JMeterThread.notifyListeners(JMeterThread.java:783)
 at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:443)
 at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:257)

The following steps resolve this issue.

1. Remove the existing BSF jar from the jmeter/lib directory.
heshans@15mbp-08077:~/Dev/tools$ rm apache-jmeter-2.11/lib/bsf-2.4.0.jar

2. Download and extract BSF from http://wi.wu-wien.ac.at/rgf/rexx/bsf4rexx/current/BSF4Rexx_install.zip.

3. Copy the following two jars to the jmeter/lib directory.
heshans@15mbp-08077:~/Dev/tools$ cp bsf4rexx/bsf-rexx-engine.jar apache-jmeter-2.11/lib/ 
heshans@15mbp-08077:~/Dev/tools$ cp bsf4rexx/bsf- apache-jmeter-2.11/lib/

Thursday, February 27, 2014

Fixing java.lang.NoClassDefFoundError: org/codehaus/classworlds/Launcher

I came across this error after I installed a new version of Maven.
$ mvn -version
java.lang.NoClassDefFoundError: org/codehaus/classworlds/Launcher
Caused by: java.lang.ClassNotFoundException: org.codehaus.classworlds.Launcher
        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.codehaus.classworlds.Launcher.  Program will exit.
Exception in thread "main"
The issue was having two MAVEN_HOME locations (two versions) on the PATH variable. Once I removed one of them, the issue was resolved.
$ mvn -version
Apache Maven 3.1.1
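One way to spot this kind of clash is to list the Maven-related entries on the PATH. A minimal sketch:

```shell
# Print how many PATH entries mention Maven; more than one usually means
# two installs are competing, as in the error above.
maven_entries=$(echo "$PATH" | tr ':' '\n' | grep -ic maven || true)
echo "Maven-related PATH entries: $maven_entries"

# Show the matching entries themselves (if any) so you can pick which to remove.
echo "$PATH" | tr ':' '\n' | grep -i maven || true
```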

Wednesday, April 17, 2013

Apache Airavata 0.7 Released

The Apache Airavata PMC is pleased to announce the immediate availability of the Airavata 0.7 release.

The release can be obtained from the Apache Airavata download page - http://airavata.apache.org/about/downloads.html


Apache Airavata is a software framework providing APIs, sophisticated server-side tools, and graphical user interfaces to construct, execute, control and manage long-running applications and workflows on distributed computing resources. Apache Airavata builds on general concepts of service-oriented computing, distributed messaging, and workflow composition and orchestration.

For general information on Apache Airavata, please visit the project website: http://airavata.apache.org/

Friday, April 5, 2013

Run EC2 Jobs with Airavata - Part III

This is a follow-up to my earlier posts [1] [2]. Here we will execute the application mentioned in [2] programmatically using Airavata.

import org.apache.airavata.commons.gfac.type.*;
import org.apache.airavata.gfac.GFacAPI;
import org.apache.airavata.gfac.GFacConfiguration;
import org.apache.airavata.gfac.GFacException;
import org.apache.airavata.gfac.context.security.AmazonSecurityContext;
import org.apache.airavata.gfac.context.ApplicationContext;
import org.apache.airavata.gfac.context.JobExecutionContext;
import org.apache.airavata.gfac.context.MessageContext;
import org.apache.airavata.schemas.gfac.*;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

import java.io.File;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

/**
 * Your Amazon instance should be in a running state before running this test.
 */
public class EC2ProviderTest {
    private JobExecutionContext jobExecutionContext;

    private static final String hostName = "ec2-host";

    private static final String hostAddress = "ec2-address";

    private static final String sequence1 = "RR042383.21413#CTGGCACGGAGTTAGCCGATCCTTATTCATAAAGTACATGCAAACGGGTATCCATA" +
            "CTCGACTTTATTCCTTTATAAAAGAAGTTTACAACCCATAGGGCAGTCATCCTTCACGCTACTTGGCTGGTTCAGGCCTGCGCCCATTGACCAATATTCCTCA" +
            "CTGCTGCCTCCCGTAGGAGTTTGGACCGTGTCTCAGTTCCAATGTGGGGGACCTTCCTCTCAGAACCCCTATCCATCGAAGACTAGGTGGGCCGTTACCCCGC" +
            "CTACTATCTAATGGAACGCATCCCCATCGTCTACCGGAATACCTTTAATCATGTGAACATGCGGACTCATGATGCCATCTTGTATTAATCTTCCTTTCAGAAG" +
            "GCTGTCCAAGAGTAGACGGCAGGTTGGATACGTGTTACTCACCGTGCCGCCGGTCGCCATCAGTCTTAGCAAGCTAAGACCATGCTGCCCCTGACTTGCATGT" +
            "GTTAAGCCTGTAGCTTAGCGTTC";

    private static final String sequence2 = "RR042383.31934#CTGGCACGGAGTTAGCCGATCCTTATTCATAAAGTACATGCAAACGGGTATCCATA" +
            "CCCGACTTTATTCCTTTATAAAAGAAGTTTACAACCCATAGGGCAGTCATCCTTCACGCTACTTGGCTGGTTCAGGCTCTCGCCCATTGACCAATATTCCTCA" +
            "CTGCTGCCTCCCGTAGGAGTTTGGACCGTGTCTCAGTTCCAATGTGGGGGACCTTCCTCTCAGAACCCCTATCCATCGAAGACTAGGTGGGCCGTTACCCCGC" +
            "CTACTATCTAATGGAACGCATCCCCATCGTCTACCGGAATACCTTTAATCATGTGAACATGCGGACTCATGATGCCATCTTGTATTAAATCTTCCTTTCAGAA" +
            "GGCTATCCAAGAGTAGACGGCAGGTTGGATACGTGTTACTCACCGTGCG";

    /* The following variables need to be set in order to run the test. Since this is account-specific
       information, I'm not adding the values here. It's the responsibility of the person running the
       test to update these variables accordingly.
       */

    /* Username used to log into your EC2 instance, e.g. ec2-user */
    private String userName = "";

    /* Secret key used to connect to the image */
    private String secretKey = "";

    /* Access key used to connect to the image */
    private String accessKey = "";

    /* Instance id of the running instance of your image */
    private String instanceId = "";

    @Before
    public void setUp() throws Exception {
        URL resource = EC2ProviderTest.class.getClassLoader().getResource("gfac-config.xml");
        assert resource != null;
        System.out.println(resource.getFile());
        GFacConfiguration gFacConfiguration = GFacConfiguration.create(new File(resource.getPath()), null, null);

        /* EC2 Host */
        HostDescription host = new HostDescription(Ec2HostType.type);
        host.getType().setHostName(hostName);
        host.getType().setHostAddress(hostAddress);

        /* App */
        ApplicationDescription ec2Desc = new ApplicationDescription(Ec2ApplicationDeploymentType.type);
        Ec2ApplicationDeploymentType ec2App = (Ec2ApplicationDeploymentType)ec2Desc.getType();

        String serviceName = "Gnome_distance_calculation_workflow";
        ec2Desc.getType().addNewApplicationName().setStringValue(serviceName);
        ec2App.setJobType(JobTypeType.EC_2);
        ec2App.setExecutable("/home/ec2-user/run.sh");
        ec2App.setExecutableType("sh");

        /* Service */
        ServiceDescription serv = new ServiceDescription();
        serv.getType().setName("GenomeEC2");

        List<InputParameterType> inputList = new ArrayList<InputParameterType>();

        InputParameterType input1 = InputParameterType.Factory.newInstance();
        input1.setParameterName("genome_input1");
        input1.setParameterType(StringParameterType.Factory.newInstance());
        inputList.add(input1);

        InputParameterType input2 = InputParameterType.Factory.newInstance();
        input2.setParameterName("genome_input2");
        input2.setParameterType(StringParameterType.Factory.newInstance());
        inputList.add(input2);

        InputParameterType[] inputParamList = inputList.toArray(new InputParameterType[inputList.size()]);

        List<OutputParameterType> outputList = new ArrayList<OutputParameterType>();
        OutputParameterType output = OutputParameterType.Factory.newInstance();
        output.setParameterName("genome_output");
        output.setParameterType(StringParameterType.Factory.newInstance());
        outputList.add(output);

        OutputParameterType[] outputParamList = outputList
                .toArray(new OutputParameterType[outputList.size()]);

        serv.getType().setInputParametersArray(inputParamList);
        serv.getType().setOutputParametersArray(outputParamList);

        jobExecutionContext = new JobExecutionContext(gFacConfiguration,serv.getType().getName());
        ApplicationContext applicationContext = new ApplicationContext();
        jobExecutionContext.setApplicationContext(applicationContext);
        applicationContext.setServiceDescription(serv);
        applicationContext.setApplicationDeploymentDescription(ec2Desc);
        applicationContext.setHostDescription(host);

        AmazonSecurityContext amazonSecurityContext =
                new AmazonSecurityContext(userName, accessKey, secretKey, instanceId);
        jobExecutionContext.addSecurityContext(AmazonSecurityContext.AMAZON_SECURITY_CONTEXT, amazonSecurityContext);

        MessageContext inMessage = new MessageContext();
        ActualParameter genomeInput1 = new ActualParameter();
        ((StringParameterType)genomeInput1.getType()).setValue(sequence1);
        inMessage.addParameter("genome_input1", genomeInput1);

        ActualParameter genomeInput2 = new ActualParameter();
        ((StringParameterType)genomeInput2.getType()).setValue(sequence2);
        inMessage.addParameter("genome_input2", genomeInput2);

        MessageContext outMessage = new MessageContext();
        ActualParameter echo_out = new ActualParameter();
        outMessage.addParameter("distance", echo_out);

        jobExecutionContext.setInMessageContext(inMessage);
        jobExecutionContext.setOutMessageContext(outMessage);
    }

    @Test
    public void testGramProvider() throws GFacException {
        GFacAPI gFacAPI = new GFacAPI();
        gFacAPI.submitJob(jobExecutionContext);
        MessageContext outMessageContext = jobExecutionContext.getOutMessageContext();
        Assert.assertEquals("476", MappingFactory.
                toString((ActualParameter) outMessageContext.getParameter("genome_output")));
    }
}

References
[1] - http://heshans.blogspot.com/2013/04/run-ec2-jobs-with-airavata-part-i.html
[2] - http://heshans.blogspot.com/2013/04/run-ec2-jobs-with-airavata-part-ii.html 

Run EC2 Jobs with Airavata - Part II

In this post we will look at how to compose a workflow out of an application that is installed in an Amazon Machine Image (AMI). In the earlier post we discussed how to do EC2 instance management using the XBaya GUI. This is the follow-up to that post.

For the Airavata EC2 integration testing, I created an AMI which has an application that does gene sequence alignment using the Smith-Waterman algorithm. I will be using that application as a reference in this post. You can use an application of your preference that resides in your AMI.

1. Unzip Airavata server distribution and start the server.
unzip apache-airavata-server-0.7-bin.zip
cd apache-airavata-server-0.7/bin
./airavata-server.sh

2. Unzip Airavata XBaya distribution and start XBaya.
unzip apache-airavata-xbaya-gui-0.7-bin.zip
cd apache-airavata-xbaya-gui-0.7/bin
./xbaya-gui.sh

Then you'll get the XBaya UI.


3. Select the "XBaya" menu and click "Add Host" to register an EC2 host. Once you add the details, click "ok".


4. You will then be prompted to enter the "Airavata Registry" information. If you are using the default setup, you don't have to do any configuration. Just click "ok".


5. In order to use the application installed in your AMI, you must register it as an application in the Airavata system. Select the "XBaya" menu and click "Register Application". You will get the following dialog. Add the input parameters expected and the output parameters generated by your application.


6. Then click the "New deployment" button. You then have to select the EC2Host that you registered earlier as the Application Host. Configure the executable path to your application in your AMI and click "Add".


7. Then click "Register". If the application registration was successful, you will get the following message.


8. Now select the "Registry" menu and click "Setup Airavata Registry". Click "ok".


9. Select the "XBaya" menu and click "New workflow". Then configure it accordingly.


10. Select your registered application from the "Application Services" and drag and drop it onto the workflow window.


11. Drag an "Instance" component from "Amazon Components" and drop it into the workflow window. Then connect it to your application using the Control ports.


12. Click the "Instance" component's config label. Configure your instance accordingly.


13. Drag and drop two input components and one output component to the workflow from "System Components".


14. Connect the components together accordingly.


15. Now click the red colored "play" button to run your workflow. You will be prompted for the input values (in my case, the gene sequences) and an experiment id. Then click "Run" to execute your workflow.


16. The execution result will be shown in the XBaya GUI.


References
[1] - http://heshans.blogspot.com/2013/04/run-ec2-jobs-with-airavata-part-i.html

Run EC2 Jobs with Airavata - Part I

This will be the first of many posts that I will be doing on Apache Airavata EC2 integration. First, let's have a look at how you can use Airavata's XBaya GUI to manage Amazon instances.

Applies to : Airavata 0.7 and above

1. Unzip Airavata server distribution and start the server.
unzip apache-airavata-server-0.7-bin.zip
cd apache-airavata-server-0.7/bin
./airavata-server.sh
2. Unzip Airavata XBaya distribution and start XBaya.
unzip apache-airavata-xbaya-gui-0.7-bin.zip
cd apache-airavata-xbaya-gui-0.7/bin
./xbaya-gui.sh
Then you'll get the XBaya UI.


3. Select the "Amazon" menu and click "Security Credentials". Specify your secret key and access key in the security credentials dialog box and click "ok".


4. Select the "Amazon" menu and click "EC2 Instance Management". It will give a glimpse of your running instances.


5. Click the "launch" button to launch new instances and the "terminate" button to terminate running instances.


6. When you launch a new instance, it will be shown in your "Amazon EC2 Management Console".



Friday, March 15, 2013

Airavata Deployment Studio (ADS)


This is an independent study that I have been doing for Apache Airavata [1]. Airavata Deployment Studio, or simply ADS, is a platform where an Airavata user can deploy his/her Airavata deployment on a Cloud computing resource on demand. Now let's dive into ADS and the actual problem that we are trying to solve here.


What is Airavata? 


Airavata is a framework which enables a user to build Science Gateways. It is used to compose, manage, execute and monitor distributed applications and workflows on computational resources. These computational resources can range from local resources to computational grids and clouds. Therefore, various users with different backgrounds either contribute or use Airavata in their applications.



Who uses Airavata? 

From the Airavata standpoint, three main users can be identified.


1) End Users


The End User is the one who has model code to do some scientific application. Sometimes this End User can be a Research Scientist. He/She writes scripts to wrap the applications up, and by executing those scripts, runs the scientific workflows on supercomputers. This can be called a scientific experiment.

2) Gateway Developers


The Research Scientist is the one who comes up with the requirement of bundling scientific applications together and composing them as a workflow. The job of the Gateway Developer is to use Airavata to wrap the above-mentioned model code and scripts together. Then, scientific workflows are created out of these. In some cases, the Scientist might be the Gateway Developer as well.

3) Core Developers


The Core Developer is the one who develops and contributes to the Airavata framework code-base. Gateway Developers use the software developed by the Core Developers to create science gateways.

Why ADS?

According to the above description, Airavata is used by different people with different technical backgrounds. Some will have in-depth knowledge of their scientific domains, such as chemistry, biology, or astronomy, but may not have in-depth knowledge of computer science aspects such as cluster configuration or configuring and troubleshooting VMs.

When it comes to ADS, it's targeted towards the first two types of users, as they will be the ones running into configuration issues with Airavata in their respective systems.

Sometimes we come across instances where a user might run into issues while setting up Airavata on their systems. These might be attributed to:
  1. User not following the documented steps properly.
  2. Issues in setting up the user environment. 
  3. User not being able to diagnose the issues at their end on their own.
  4. Sometimes when we try to diagnose their issue remotely, we face difficulties accessing the user's VM due to security policies defined in their system.
  5. Different security policies at client's firewall.

Due to the above-mentioned issues, a first-time user might go away with a bad impression due to a system/VM level issue that might not be directly related to Airavata.

What we are trying to do here is give a first-time user a good first impression, as well as ease of configuring the Airavata ecosystem for production usage.

How? 

Now you might be wondering how ADS achieves this. ADS will use FutureGrid [3] as the underlying resource platform for this application. If you are interested in learning what FutureGrid is, please refer to [3] for more information. ADS will ultimately become a plugin to FutureGrid's CloudMesh [4] environment.

ADS will provide the user with a web interface which he/she can use to configure his/her Airavata ecosystem. Once the configuration options are selected and the user hits the submit button, a new VM with the selected configurations will be created. The user will be able to create his/her image with the following properties.
  • Infrastructure - eg: OpenStack, Eucalyptus, EC2, etc
  • Architecture - eg: 64-bit, 32-bit 
  • Memory - eg: 2GB, 4GB, 8GB, etc
  • Operating System - eg: Ubuntu, CentOS, Fedora, etc
  • Java version - eg: Java 1.6, Java 1.7
  • Tomcat Version - eg: Tomcat6, Tomcat7
  • Airavata Version - eg: Airavata-0.6, Airavata-0.7

Advantages?

  1. One click install. 
  2. No need to interact with the shell to configure an Airavata environment.
  3. Deploying on various Cloud platforms based on user preference.
  4. Ease of use. 
  5. A first-time user will be able to quickly configure an instance of his own and run a sample workflow.
  6. On demand aspect.

Sneak Peek

The following screenshots show what ADS will look like.









References 


Friday, October 5, 2012

Run a workflow on Ranger using Airavata


This post will show you how to run a workflow on Ranger [1] using Apache Airavata.

Applies to : Airavata 0.5-SNAPSHOT


Checkout and build Airavata

1. Checkout the source from the svn trunk
svn co https://svn.apache.org/repos/asf/airavata/trunk
2. Move to the root directory and build the trunk using maven.

eg: build with tests
mvn clean install
eg: build skipping tests
mvn clean install -Dmaven.test.skip=true -o
3. Unzip the apache-airavata-0.5-SNAPSHOT-bin.zip pack. The unzipped directory will be called AIRAVATA_HOME hereafter.

How to start the Airavata System Components with MySQL

Airavata runs with Apache Derby as the backend database by default. You can use a MySQL database instead of the default Derby database if you wish.

1. Create a new user called airavata with password airavata in the running MySQL instance.

mysql> CREATE USER 'airavata'@'localhost' IDENTIFIED BY 'airavata';
2. Create a MySQL database in your server with the required name for user airavata (assume the name is persistent_data).

mysql> CREATE DATABASE persistent_data;

GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
ON persistent_data.*
TO 'airavata'@'localhost'
IDENTIFIED BY 'airavata';
3. Copy the MySQL driver jar (-5.1.6.jar) to AIRAVATA_HOME/standalone-server/lib

4. Edit the JDBC connection URLs in the repository.properties file that resides in AIRAVATA_HOME/standalone-server/conf. You can achieve this by uncommenting the parameters under the "#configuration for mysql" section and commenting out the default Derby configuration. (If you followed the above instructions you might not have to change this repository.properties file.)


registry.jdbc.url=jdbc:mysql://localhost:3306/persistent_data
registry.jdbc.driver=com.mysql.jdbc.Driver
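The Derby-to-MySQL switch in step 4 can also be scripted. The snippet below demonstrates it on a scratch copy of the file; the Derby URL shown is a placeholder, the mysql property names are the ones from this post, and GNU sed's in-place flag is assumed.

```shell
# Build a scratch repository.properties with a derby line active
# and the mysql lines commented out (placeholder content, not the real file).
cat > /tmp/repository.properties <<'EOF'
registry.jdbc.url=jdbc:derby:persistent_data
#registry.jdbc.url=jdbc:mysql://localhost:3306/persistent_data
#registry.jdbc.driver=com.mysql.jdbc.Driver
EOF

# Comment out the derby line, uncomment the mysql lines.
sed -i \
  -e 's|^registry\.jdbc\.url=jdbc:derby|#&|' \
  -e 's|^#\(registry\.jdbc\.url=jdbc:mysql\)|\1|' \
  -e 's|^#\(registry\.jdbc\.driver=com\.mysql\)|\1|' \
  /tmp/repository.properties

# Show the now-active properties.
grep '^registry' /tmp/repository.properties
```

Run the same sed against your real AIRAVATA_HOME/standalone-server/conf/repository.properties after backing it up.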
5. In the next section we will try to run a workflow on Ranger [1]. Configure the repository.properties file by modifying the dummy values for the following properties. The XSEDE cert files can be downloaded from [2].

trusted.cert.location=path to certificates for ranger
myproxy.server=myproxy.teragrid.org
myproxy.user=username
myproxy.pass=password
myproxy.life=3600
6. Go to AIRAVATA_HOME/bin and start the script airavata-server.sh. Now the Airavata server will start with the MySQL database (in this case, the persistent_data database).

7. When starting XBaya, it should also point to the same database that we specified when starting the Airavata server. In order to do that, copy the same repository.properties file that you edited above to the AIRAVATA_HOME/bin folder. Now you are ready to start XBaya. To start XBaya, go to the AIRAVATA_HOME/bin folder and run the script xbaya-gui.sh. Now XBaya will start with the same database that was used to start the Airavata server.

Registering An application which will run on Ranger

1. Click XBaya --> Register Application. Then fill in the following values as shown in the diagram.



2. Click the "New Deployment" button. The following dialog box will then appear. Select Ranger as the host from the drop-down menu. Then fill in the application that you are going to invoke (in this case /bin/echo) and set the proper scratch directory. The scratch directory can be identified by running the following command on Ranger once you are logged into it through your shell.

echo $SCRATCH


3. Then click the "HPC Configuration" button. Select the job type as serial. Specify the project account assigned to you. Finally, specify the queue as "development", as your job will be submitted to this queue. You can configure the other fields as well, but if you don't set them, default values will be used. Then click the "update" button.


4. Click "Add" button.


5. Click "Register" button.

6. If the registration was successful, the following message will appear.


7. Click XBaya Menu and select "New Workflow".


8. Give a name for your workflow and hit "Ok".


9. Your application will be listed under "Application Services" in the left menu.


10. Drag it and drop it onto the workflow window.


11. Select the "input" and "output" components from the component list. Drag them to the workflow window. Then connect the dots using the mouse pointer as shown in the diagram below.


12. Then hit the red colored "play" button on the top left corner. The following dialog will appear. Give an input for your workflow and an experiment name for the workflow run. Then hit "Run". Now your job will be launched on Ranger.



[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide
[2] - https://software.xsede.org/security/xsede-certs.tar.gz
[3] - http://airavata.apache.org/documentation/system/airavata-in-10-minutes.html

How to start Airavata with a MySQL backend

Airavata runs with Apache Derby as the backend database by default. You can use a MySQL database instead of the default Derby database. The following post describes $subject.

1. Create a new user called airavata with password airavata in the running MySQL instance.

mysql> CREATE USER 'airavata'@'localhost' IDENTIFIED BY 'airavata';

2. Create a MySQL database in your server with the required name for user airavata (assume the name is persistent_data).

mysql> CREATE DATABASE persistent_data;

GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP

ON persistent_data.*
TO 'airavata'@'localhost'
IDENTIFIED BY 'airavata';

3. Copy the MySQL driver jar (-5.1.6.jar) to AIRAVATA_HOME/standalone-server/lib

4. Edit the JDBC connection URLs in the repository.properties file that resides in AIRAVATA_HOME/standalone-server/conf. You can achieve this by uncommenting the parameters under the "#configuration for mysql" section and commenting out the default Derby configuration. (If you followed the above instructions you might not have to change this repository.properties file.)

registry.jdbc.url=jdbc:mysql://localhost:3306/persistent_data
registry.jdbc.driver=com.mysql.jdbc.Driver

5. Go to AIRAVATA_HOME/bin and start the script airavata-server.sh. Now the Airavata server will start with the MySQL database (in this case, the persistent_data database).

6. When starting XBaya, it should also point to the same database that we specified when starting the Airavata server. In order to do that, copy the same repository.properties file that you edited above to the AIRAVATA_HOME/bin folder. Now you are ready to start XBaya. To start XBaya, go to the AIRAVATA_HOME/bin folder and run the script xbaya-gui.sh. Now XBaya will start with the same database that was used to start the Airavata server.

Monday, September 10, 2012

Configure Apache Rave for SSL

I had to do $subject for the OA4MP integration work that I am currently doing with Rave. I had to make some configuration changes to get SSL working with Rave. Following are the instructions on how to $subject.

Enabling SSL in Tomcat

The following instructions demonstrate how to get Tomcat 6 running over SSL using a self-signed certificate.
  • Find the reverse DNS (of the IP address) of the machine on which you are going to install.
$ host your-ip-address
  • You'll then get the reverse DNS of the IP address you gave.
 xxx-yy-zzz-hhh.dhcp-bl.xxx.edu
  • Generate a self-signed certificate that you'll use with Tomcat.
keytool -genkey -alias tomcat -keyalg RSA -validity 365 -storepass changeit -keystore $JAVA_HOME/jre/lib/security/cacerts

What is your first and last name?
  [Unknown]:  xxx-yy-zzz-hhh.dhcp-bl.xxx.edu
What is the name of your organizational unit?
  [Unknown]:  SGG
What is the name of your organization?
  [Unknown]:  IU
What is the name of your City or Locality?
  [Unknown]:  Bloomington
What is the name of your State or Province?
  [Unknown]:  IN
What is the two-letter country code for this unit?
  [Unknown]:  US
Is CN=xxx-yy-zzz-hhh.dhcp-bl.xxx.edu, OU=SGG, O=IU, L=Bloomington, ST=IN, C=US correct?
  [no]:  yes

Enter key password for
        (RETURN if same as keystore password):
  • Edit Tomcat's server.xml to enable an SSL listener on port 443 using our alternate cacerts file. By default Tomcat looks for a certificate with the alias "tomcat", which is what we used to create our self-signed certificate. (Uncomment the HTTPS connector and configure it to use our custom cacerts file.)

<Connector port="443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="$JAVA_HOME/jre/lib/security/cacerts" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />

Configure Apache Rave and Shindig to run over SSL

1. Configure properties files.
  • Edit the portal.properties file to configure Apache Rave to use SSL. (Update the following values at the top of the portal.properties config file.)
portal.opensocial_engine.protocol=https
portal.opensocial_engine.root=xxx-yy-zzz-hhh.dhcp-bl.xxx.edu
portal.opensocial_engine.gadget_path=/gadgets
Edit the rave.shindig.properties and container.js files to configure Shindig to use SSL.
  • The change to container.js is a search-and-replace of http:// with https://.
  • Update the following values at the top of the rave.shindig.properties config file.
shindig.host= xxx-yy-zzz-hhh.dhcp-bl.xxx.edu
shindig.port=
shindig.contextroot=
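The container.js search-and-replace can be done with sed. The sketch below demonstrates it on a scratch file: the gadget URL is a made-up example, and GNU sed's in-place flag is assumed.

```shell
# Scratch stand-in for Shindig's container.js with an http URL in it.
printf 'gadgets.uri = "http://xxx-yy-zzz-hhh.dhcp-bl.xxx.edu/gadgets"\n' > /tmp/container.js

# Replace every http:// with https://, in place.
sed -i 's|http://|https://|g' /tmp/container.js

cat /tmp/container.js
```

Point the same command at the real container.js in your Shindig deployment after backing it up.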

2. Update the rave-portal pom.
  • Add the following configuration to the Cargo plugin. It uses the Tomcat server.xml file (configured in the first section) given in the configuration to start up a Tomcat instance.
<configfiles>
    <configfile>
        <file>${project.basedir}/../rave-portal-resources/src/main/dist/conf/tomcat-users.xml</file>
        <todir>conf/</todir>
        <tofile>tomcat-users.xml</tofile>
    </configfile>
    <configfile>
        <file>/home/heshan/Dev/airavata-rave-integration/oauth/rave-0.15-oa4mp-branch/config/server.xml</file>
        <todir>conf/</todir>
        <tofile>server.xml</tofile>
    </configfile>
</configfiles>
  • Build the Rave project.
mvn clean install
  • Move to the rave-portal module and start Rave using the Cargo plugin.
cd rave-portal
mvn cargo:start
  • Log into the portal using the login page. 
https://156-56-179-232.dhcp-bl.indiana.edu/portal/login

Friday, August 3, 2012

Apache Airavata 0.4-INCUBATING Released


The Apache Airavata (Incubating) team is pleased to announce the immediate availability of the Airavata 0.4-INCUBATING release.

The release can be obtained from the Apache Airavata download page - http://incubator.apache.org/airavata/about/downloads.html


Apache Airavata is a software toolkit currently used to build science gateways, but it has much wider potential use. It provides features to compose, manage, execute, and monitor small to large scale applications and workflows on computational resources ranging from local clusters to national grids and computing clouds. Gadget interfaces to Airavata back-end services can be deployed in OpenSocial containers such as Apache Rave and modified to suit users' needs. Airavata builds on general concepts of service-oriented computing, distributed messaging, and workflow composition and orchestration.

For general information on Apache Airavata, please visit the project website:

Disclaimer:
 Apache Airavata is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects.  While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

Friday, July 6, 2012

How to Submit Patches to Apache Airavata


This post describes how an Airavata user can contribute to the Airavata project by submitting patches. Users can follow the steps given below.
  • Identify an issue that you want to fix or improve
  • Search JIRA and the mailing list to see if it’s already been discussed
  • If it’s a bug or a feature request, open a JIRA issue
  • Create a sample that you can use for prototyping the feature or demonstrating the bug. If creating a sample is time consuming, write steps to reproduce the issue.
  • Attach this sample to the JIRA issue if it represents a bug report.
  • Set up an svn client on your system.
  • Checkout the source code.
  • Make your changes
  • Create the patch:
    • svn add any_files_you_added
    • svn diff > /tmp/fix-AIRAVATA-NNNN.patch
  • Attach that file (/tmp/fix-AIRAVATA-NNNN.patch) to the JIRA
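The file produced by svn diff is in unified-diff format. As a quick, self-contained illustration of what such a patch looks like, the sketch below uses the plain diff tool (which emits the same unified format as svn diff); the file names are made up, and the NNNN placeholder stands for your JIRA issue number as above.

```shell
#!/bin/sh
# Illustration only: Foo.java is a hypothetical file, and plain `diff -u`
# stands in for `svn diff` since both emit the unified-diff format.
mkdir -p /tmp/patch-demo
cd /tmp/patch-demo
printf 'public class Foo {\n}\n' > Foo.java.orig
printf 'public class Foo {\n    // fixed\n}\n' > Foo.java
# diff exits with status 1 when the files differ, so ignore its exit status
diff -u Foo.java.orig Foo.java > fix-AIRAVATA-NNNN.patch || true
cat fix-AIRAVATA-NNNN.patch
```

The `---`/`+++` header lines and the `@@` hunk markers are what JIRA reviewers and `patch -p0` consume.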

Thursday, July 5, 2012

Deploying Airavata Server on Tomcat


A shell script named setup_tomcat.sh is shipped with Airavata to assist you in deploying the Airavata server on Tomcat. The following steps describe how to do it.

1) Update the tomcat.hostname and tomcat.port properties in the airavata-tomcat.properties file. You can keep the defaults if you don't want to change ports; in that case you don't have to edit the file at all. It can be found at AIRAVATA_HOME/tools/airavata-tomcat.properties.

2) Download the following to your local file system.
a) apache-tomcat-7.0.28.zip
b) apache-airavata-0.4-incubating-SNAPSHOT-bin.zip
c) axis2-1.5.1-war.zip (unzip it; when running the script, point to the extracted axis2.war)
d) jackrabbit-webapp-2.4.0.war

3) Run the script (setup_tomcat.sh) by providing the full file paths of the files you downloaded. This script can be found in AIRAVATA_HOME/tools/ directory.
./setup_tomcat.sh --tomcat=/home/heshan/Dev/setup/gw8/apache-tomcat-7.0.28.zip --airavata=/home/heshan/Dev/setup/gw8/apache-airavata-0.4-incubating-SNAPSHOT-bin.zip --axis2=/home/heshan/Dev/setup/gw8/axis2.war --jackrabbit=/home/heshan/Dev/setup/gw8/jackrabbit-webapp-2.4.0.war --properties=/home/heshan/Dev/setup/gw8/airavata-tomcat.properties

4) Start Tomcat server.
eg: ./catalina.sh start

5) Before using Airavata, go to http://localhost:8090/jackrabbit-webapp-2.4.0 and create a default content repository.

6) Restart Tomcat server.

Wednesday, July 4, 2012

Registering Application Descriptors using Airavata Client API

The following post demonstrates how to programmatically register (1) host, (2) application, and (3) service descriptors using the Apache Airavata Client API.

import org.apache.airavata.common.registry.api.exception.RegistryException;
import org.apache.airavata.commons.gfac.type.ApplicationDeploymentDescription;
import org.apache.airavata.commons.gfac.type.HostDescription;
import org.apache.airavata.commons.gfac.type.ServiceDescription;
import org.apache.airavata.migrator.registry.MigrationUtil;
import org.apache.airavata.registry.api.AiravataRegistry;
import org.apache.airavata.schemas.gfac.*;

import java.net.MalformedURLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DescriptorRegistrationSample {

    public static void main(String[] args) {
        Map<String, String> config = new HashMap<String, String>();
        config.put(org.apache.airavata.client.airavata.AiravataClient.MSGBOX,"http://localhost:8090/axis2/services/MsgBoxService");
        config.put(org.apache.airavata.client.airavata.AiravataClient.BROKER, "http://localhost:8090/axis2/services/EventingService");
        config.put(org.apache.airavata.client.airavata.AiravataClient.WORKFLOWSERVICEURL, "http://localhost:8090/axis2/services/WorkflowInterpretor?wsdl");
        config.put(org.apache.airavata.client.airavata.AiravataClient.JCR, "http://localhost:8090/jackrabbit-webapp-2.4.0/rmi");
        config.put(org.apache.airavata.client.airavata.AiravataClient.JCR_USERNAME, "admin");
        config.put(org.apache.airavata.client.airavata.AiravataClient.JCR_PASSWORD, "admin");
        config.put(org.apache.airavata.client.airavata.AiravataClient.GFAC, "http://localhost:8090/axis2/services/GFacService");
        config.put(org.apache.airavata.client.airavata.AiravataClient.WITHLISTENER, "false");
        config.put(org.apache.airavata.client.airavata.AiravataClient.TRUSTED_CERT_LOCATION, "/Users/Downloads/certificates");

        org.apache.airavata.client.airavata.AiravataClient airavataClient = null;
        try {
            airavataClient = new org.apache.airavata.client.airavata.AiravataClient(config);
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }

        // Create Host Description
        HostDescription host = new HostDescription();
        host.getType().changeType(GlobusHostType.type);
        host.getType().setHostName("gram");
        host.getType().setHostAddress("gatekeeper2.ranger.tacc.teragrid.org");
        ((GlobusHostType) host.getType()).
                setGridFTPEndPointArray(new String[]{"gsiftp://gridftp.ranger.tacc.teragrid.org:2811/"});
        ((GlobusHostType) host.getType()).
                setGlobusGateKeeperEndPointArray(new String[]{"gatekeeper.ranger.tacc.teragrid.org:2119/jobmanager-sge"});


        // Create Application Description 
        ApplicationDeploymentDescription appDesc = new ApplicationDeploymentDescription(GramApplicationDeploymentType.type);
        GramApplicationDeploymentType app = (GramApplicationDeploymentType) appDesc.getType();
        app.setCpuCount(1);
        app.setNodeCount(1);
        ApplicationDeploymentDescriptionType.ApplicationName name = appDesc.getType().addNewApplicationName();
        name.setStringValue("EchoMPILocal");
        app.setExecutableLocation("/home/path_to_executable");
        app.setScratchWorkingDirectory("/home/path_to_temporary_directory");
        app.setCpuCount(16);
        app.setJobType(MigrationUtil.getJobTypeEnum("MPI"));
        //app.setMinMemory();
        ProjectAccountType projectAccountType = ((GramApplicationDeploymentType) appDesc.getType()).addNewProjectAccount();
        projectAccountType.setProjectAccountNumber("XXXXXXXX");

        // Create Service Description
        ServiceDescription serv = new ServiceDescription();
        serv.getType().setName("MockPwscfMPIService");

        InputParameterType input = InputParameterType.Factory.newInstance();
        input.setParameterName("echo_input_name");
        ParameterType parameterType = input.addNewParameterType();
        parameterType.setType(DataType.Enum.forString("String"));
        parameterType.setName("String");
        List<InputParameterType> inputList = new ArrayList<InputParameterType>();
        inputList.add(input);
        InputParameterType[] inputParamList = inputList.toArray(new InputParameterType[inputList
                .size()]);

        OutputParameterType output = OutputParameterType.Factory.newInstance();
        output.setParameterName("echo_mpi_output");
        ParameterType parameterType1 = output.addNewParameterType();
        parameterType1.setType(DataType.Enum.forString("String"));
        parameterType1.setName("String");
        List<OutputParameterType> outputList = new ArrayList<OutputParameterType>();
        outputList.add(output);
        OutputParameterType[] outputParamList = outputList
                .toArray(new OutputParameterType[outputList.size()]);
        serv.getType().setInputParametersArray(inputParamList);
        serv.getType().setOutputParametersArray(outputParamList);

        // Save to Registry
        if (airavataClient!=null) {
            System.out.println("Saving to Registry");
            AiravataRegistry jcrRegistry = airavataClient.getRegistry();
            try {
                jcrRegistry.saveHostDescription(host);
                jcrRegistry.saveServiceDescription(serv);
                jcrRegistry.saveDeploymentDescription(serv.getType().getName(), host.getType().getHostName(), appDesc);

                jcrRegistry.deployServiceOnHost(serv.getType().getName(), host.getType().getHostName());
            } catch (RegistryException e) {
                e.printStackTrace();
            }
        }

        System.out.println("DONE");
        
    }

}

Tuesday, July 3, 2012

Airavata Programming API


Apache Airavata's Programming API is the API exposed to gateway developers, who can use it to execute and monitor workflows. Keep in mind that a workflow (.xwf) cannot be composed using the API; for that, use the XBaya user interface. In other words, the Client API supports all workflow-related operations except workflow creation.


The main motivation behind having a Client API is to give users access to the persistent information stored in the Registry. The information persisted in the Registry includes:
  • Descriptors
  • Workflow information
  • Workflow provenance information
  • Airavata configuration

The following are the high-level use cases of the Airavata Client API.



Client API Usecases



  1. Registry Operations 
    • Retrieve registry information
    • Access registry information
    • Update registry information
    • Delete registry information
    • Search registry information
  2. Execute workflow
    • Run workflow
    • Set inputs
    • Set workflow node IDs 
  3. Monitoring
  4. Provenance
  5. User Management (This is not yet implemented. It's currently on our roadmap and is added as a placeholder.)
    • User roles
    • Administration



Client API Components



The Client API consists of seven main components.
  1. Airavata API
    • This is an aggregator API containing all the base methods of the Airavata API.
  2. Airavata Manager
    • This exposes configuration-related information about Airavata. It currently contains service URLs only.
  3. Application Manager
    • This handles operations related to descriptors, namely:
      1. Host description
      2. Service description
      3. Application description
  4. Execution Manager
    • This can be used to run and monitor workflows.
  5. Provenance Manager
    • This provides an API to manage provenance-related information, i.e., it keeps track of inputs, outputs, etc. related to a workflow.
  6. User Manager
    • User-management-related API is exposed through this. Currently, Airavata does not support user management, but it is on the Airavata roadmap.
  7. Workflow Manager
    • Every operation related to workflows is exposed through this, e.g.:
      1. saving a workflow
      2. deleting a workflow
      3. retrieving a workflow

Wednesday, June 27, 2012

Apache Airavata Stakeholders

Airavata Users
Airavata is a framework which enables a user to build science gateways. It is used to compose, manage, execute, and monitor distributed applications and workflows on computational resources, which can range from local resources to computational grids and clouds. Therefore, various users with different backgrounds either contribute to or use Airavata in their applications. From the Airavata standpoint, three main users can be identified.
  • Research Scientists (Gateway End Users)
  • Gateway Developers
  • Core Developers
Now let's focus on each user and how they fit into Airavata's big picture.

 

Gateway End Users

A gateway end user is one who has model code for some scientific application; often this end user is a research scientist. He or she writes scripts to wrap the applications up and, by executing those scripts, runs scientific workflows on supercomputers. This can be called a scientific experiment. The scientist might then need to call several of these applications together and compose them into a workflow. That's where the gateway developer comes into the picture.

 

Gateway Developers

The research scientist is the one who comes up with the requirement of bundling scientific applications together and composing them into a workflow.
The job of the gateway developer is to use Airavata to wrap the above-mentioned model code and scripts together; scientific workflows are then created out of these.
The diagram above depicts how the gateway developer fits into the picture.

 

Core Developers


A core developer is one who develops and contributes to the Airavata framework codebase. Gateway developers use the software developed by the core developers to create science gateways.

Thursday, June 21, 2012

Apache Airavata 0.3-INCUBATING Released


The Apache Airavata (Incubating) team is pleased to announce the immediate
availability of the Airavata 0.3-INCUBATING release.

The release can be obtained from the Apache Airavata download page - http://incubator.apache.org/airavata/about/downloads.html


Apache Airavata is a software toolkit currently used to build science gateways, but it has much wider potential use. It provides features to compose, manage, execute, and monitor small to large scale applications and workflows on computational resources ranging from local clusters to national grids and computing clouds. Gadget interfaces to Airavata back-end services can be deployed in OpenSocial containers such as Apache Rave and modified to suit specific needs. Airavata builds on general concepts of service-oriented computing, distributed messaging, and workflow composition and orchestration.

For general information on Apache Airavata, please visit the project website:

Wednesday, June 20, 2012

Programmatically execute an Echo job on Ranger using Apache Airavata


In an earlier post [2] we looked at how to execute an Echo job on Ranger [1] using the XBaya GUI. This post describes how to run the same scenario using a Java client. This client does not use the AiravataClient API; instead it uses XMLBeans generated from the schema to describe and run the job programmatically. I will be writing a test client later which uses the AiravataClient API.

1. Configure the gram.properties file that will be used in the test case (let's assume it's named gram_ranger.properties).

# The myproxy server to retrieve the grid credentials
myproxy.server=myproxy.teragrid.org
# Example: XSEDE myproxy server
#myproxy.server=myproxy.teragrid.org
# The user name and password to fetch grid proxy
myproxy.username=username
myproxy.password=********
# Directory with Grid Certificate Authority certificates and CRLs
# The certificates for XSEDE can be downloaded from http://software.xsede.org/security/xsede-certs.tar.gz
ca.certificates.directory=/home/heshan/Dev/setup/gram-provider/certificates
# On computational grids, an allocation is awarded with a charge number. On XSEDE, the numbers are typically of the format TG-DIS123456
allocation.charge.number=TG-STA110014S
# The scratch space with ample space to create temporary working directory on target compute cluster
scratch.working.directory=/scratch/01437/ogce/test
# Name, FQDN, and gram and gridftp end points of the remote compute cluster
host.commom.name=gram
host.fqdn.name=gatekeeper2.ranger.tacc.teragrid.org
gridftp.endpoint=gsiftp://gridftp.ranger.tacc.teragrid.org:2811/
gram.endpoints=gatekeeper.ranger.tacc.teragrid.org:2119/jobmanager-sge
defualt.queue=development

2. Using the above configured properties file (gram_ranger.properties), run the test case, which will execute a simple Echo job on Ranger.



import org.apache.airavata.commons.gfac.type.ActualParameter;
import org.apache.airavata.commons.gfac.type.ApplicationDeploymentDescription;
import org.apache.airavata.commons.gfac.type.HostDescription;
import org.apache.airavata.commons.gfac.type.ServiceDescription;
import org.apache.airavata.core.gfac.context.invocation.impl.DefaultExecutionContext;
import org.apache.airavata.core.gfac.context.invocation.impl.DefaultInvocationContext;
import org.apache.airavata.core.gfac.context.message.impl.ParameterContextImpl;
import org.apache.airavata.core.gfac.context.security.impl.GSISecurityContext;
import org.apache.airavata.core.gfac.notification.impl.LoggingNotification;
import org.apache.airavata.core.gfac.services.impl.PropertiesBasedServiceImpl;
import org.apache.airavata.registry.api.impl.AiravataJCRRegistry;
import org.apache.airavata.schemas.gfac.*;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

import java.net.URL;
import java.util.*;

import static org.junit.Assert.fail;

public class GramProviderTest {

    public static final String MYPROXY = "myproxy";
    public static final String GRAM_PROPERTIES = "gram_ranger.properties";
    private AiravataJCRRegistry jcrRegistry = null;

    @Before
    public void setUp() throws Exception {
        /*
         * Create database
         */
        Map<String, String> config = new HashMap<String, String>();
        config.put("org.apache.jackrabbit.repository.home", "target");

        jcrRegistry = new AiravataJCRRegistry(null,
                "org.apache.jackrabbit.core.RepositoryFactoryImpl", "admin",
                "admin", config);
    
        /*
           * Host
           */

        URL url = this.getClass().getClassLoader().getResource(GRAM_PROPERTIES);
        Properties properties = new Properties();
        properties.load(url.openStream());
        HostDescription host = new HostDescription();
        host.getType().changeType(GlobusHostType.type);
        host.getType().setHostName(properties.getProperty("host.commom.name"));
        host.getType().setHostAddress(properties.getProperty("host.fqdn.name"));
        ((GlobusHostType) host.getType()).setGridFTPEndPointArray(new String[]{properties.getProperty("gridftp.endpoint")});
        ((GlobusHostType) host.getType()).setGlobusGateKeeperEndPointArray(new String[]{properties.getProperty("gram.endpoints")});


        /*
        * App
        */
        ApplicationDeploymentDescription appDesc = new ApplicationDeploymentDescription(GramApplicationDeploymentType.type);
        GramApplicationDeploymentType app = (GramApplicationDeploymentType) appDesc.getType();
        app.setCpuCount(1);
        app.setNodeCount(1);
        ApplicationDeploymentDescriptionType.ApplicationName name = appDesc.getType().addNewApplicationName();
        name.setStringValue("EchoLocal");
        app.setExecutableLocation("/bin/echo");
        app.setScratchWorkingDirectory(properties.getProperty("scratch.working.directory"));
        app.setCpuCount(1);
        ProjectAccountType projectAccountType = ((GramApplicationDeploymentType) appDesc.getType()).addNewProjectAccount();
        projectAccountType.setProjectAccountNumber(properties.getProperty("allocation.charge.number"));
        QueueType queueType = app.addNewQueue();
        queueType.setQueueName(properties.getProperty("defualt.queue"));
        app.setMaxMemory(100);
        
        /*
           * Service
           */
        ServiceDescription serv = new ServiceDescription();
        serv.getType().setName("SimpleEcho");

        InputParameterType input = InputParameterType.Factory.newInstance();
        ParameterType parameterType = input.addNewParameterType();
        parameterType.setName("echo_input");
        List<InputParameterType> inputList = new ArrayList<InputParameterType>();
        inputList.add(input);
        InputParameterType[] inputParamList = inputList.toArray(new InputParameterType[inputList.size()]);

        OutputParameterType output = OutputParameterType.Factory.newInstance();
        ParameterType parameterType1 = output.addNewParameterType();
        parameterType1.setName("echo_output");
        List<OutputParameterType> outputList = new ArrayList<OutputParameterType>();
        outputList.add(output);
        OutputParameterType[] outputParamList = outputList.toArray(new OutputParameterType[outputList.size()]);
        serv.getType().setInputParametersArray(inputParamList);
        serv.getType().setOutputParametersArray(outputParamList);

        /*
           * Save to registry
           */
        jcrRegistry.saveHostDescription(host);
        jcrRegistry.saveDeploymentDescription(serv.getType().getName(), host.getType().getHostName(), appDesc);
        jcrRegistry.saveServiceDescription(serv);
        jcrRegistry.deployServiceOnHost(serv.getType().getName(), host.getType().getHostName());
    }

    @Test
    public void testExecute() {
        try {
            URL url = this.getClass().getClassLoader().getResource(GRAM_PROPERTIES);
            Properties properties = new Properties();
            properties.load(url.openStream());

            DefaultInvocationContext ct = new DefaultInvocationContext();
            DefaultExecutionContext ec = new DefaultExecutionContext();
            ec.addNotifiable(new LoggingNotification());
            ec.setRegistryService(jcrRegistry);
            ct.setExecutionContext(ec);


            GSISecurityContext gsiSecurityContext = new GSISecurityContext();
            gsiSecurityContext.setMyproxyServer(properties.getProperty("myproxy.server"));
            gsiSecurityContext.setMyproxyUserName(properties.getProperty("myproxy.username"));
            gsiSecurityContext.setMyproxyPasswd(properties.getProperty("myproxy.password"));
            gsiSecurityContext.setMyproxyLifetime(14400);
            gsiSecurityContext.setTrustedCertLoc(properties.getProperty("ca.certificates.directory"));

            ct.addSecurityContext(MYPROXY, gsiSecurityContext);

            ct.setServiceName("SimpleEcho");

            /*
            * Input
            */
            ParameterContextImpl input = new ParameterContextImpl();
            ActualParameter echo_input = new ActualParameter();
            ((StringParameterType) echo_input.getType()).setValue("echo_output=hello");
            input.add("echo_input", echo_input);

            /*
            * Output
            */
            ParameterContextImpl output = new ParameterContextImpl();
            ActualParameter echo_output = new ActualParameter();
            output.add("echo_output", echo_output);

            // parameter
            ct.setInput(input);
            ct.setOutput(output);

            PropertiesBasedServiceImpl service = new PropertiesBasedServiceImpl();
            service.init();
            service.execute(ct);

            Assert.assertNotNull(ct.getOutput());
            Assert.assertNotNull(ct.getOutput().getValue("echo_output"));
            Assert.assertEquals("hello", ((StringParameterType) ((ActualParameter) ct.getOutput().getValue("echo_output")).getType()).getValue());


        } catch (Exception e) {
            e.printStackTrace();
            fail("ERROR");
        }
    }
}
[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide 
[2] - http://heshans.blogspot.com/2012/06/execute-echo-job-on-ranger-using-apache.html