Friday, October 5, 2012

Run a workflow on Ranger using Airavata


This post will show you how to run a workflow on Ranger [1] using Apache Airavata.

Applies to: Airavata 0.5-SNAPSHOT


Checkout and build Airavata

1. Check out the source from the svn trunk:
svn co https://svn.apache.org/repos/asf/airavata/trunk
2. Move to the root directory and build the trunk using Maven.

e.g., build with tests
mvn clean install
e.g., build skipping tests
mvn clean install -Dmaven.test.skip=true -o
3. Unzip the apache-airavata-0.5-SNAPSHOT-bin.zip pack. The unzipped directory will be called AIRAVATA_HOME hereafter.

How to start the Airavata System Components with MySQL

By default, Airavata runs with Apache Derby as the backend database. You can use a MySQL database instead of the default Derby database if you wish.

1. Create a new user called airavata with password airavata in the running MySQL instance.

mysql> CREATE USER 'airavata'@'localhost' IDENTIFIED BY 'airavata';
2. Create a MySQL database on your server with a name of your choice for the user airavata (assume the name is persistent_data).

mysql> CREATE DATABASE persistent_data;

GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
ON persistent_data.*
TO 'airavata'@'localhost'
IDENTIFIED BY 'airavata';
3. Copy the MySQL JDBC driver jar (e.g. mysql-connector-java-5.1.6.jar) to AIRAVATA_HOME/standalone-server/lib.

4. Edit the JDBC connection properties in the repository.properties file that resides in AIRAVATA_HOME/standalone-server/conf. You can do this by uncommenting the parameters under the "#configuration for mysql" section and commenting out the default Derby configuration. (If you used the user, password, and database names above, you should not have to change the values themselves.)


registry.jdbc.url=jdbc:mysql://localhost:3306/persistent_data
registry.jdbc.driver=com.mysql.jdbc.Driver
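If you want to sanity-check the MySQL setup before starting the server, a small standalone JDBC test such as the following can help. This is only a sketch: it assumes the connector jar from step 3 is on the classpath and reuses the exact URL, user, and password configured above.

import java.sql.Connection;
import java.sql.DriverManager;

public class MySqlConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Same driver class and JDBC URL as in repository.properties
        Class.forName("com.mysql.jdbc.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/persistent_data", "airavata", "airavata");
        System.out.println("Connected as: " + conn.getMetaData().getUserName());
        conn.close();
    }
}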
5. In the next section we will run a workflow on Ranger [1]. Configure the repository.properties file by replacing the dummy values of the following properties. The XSEDE certificate files can be downloaded from [2].

trusted.cert.location=path to certificates for ranger
myproxy.server=myproxy.teragrid.org
myproxy.user=username
myproxy.pass=password
myproxy.life=3600
6. Go to AIRAVATA_HOME/bin and run the airavata-server.sh script. The Airavata server will now start with the MySQL database (in this case the persistent_data database).

7. When starting XBaya, it should also point to the same database that we specified when starting the Airavata server. To do that, copy the repository.properties file that you edited in steps 4 and 5 to the AIRAVATA_HOME/bin folder. Now you are ready to start XBaya: go to the AIRAVATA_HOME/bin folder and run the xbaya-gui.sh script. XBaya will start with the same database that was used to start the Airavata server.

Registering an application which will run on Ranger

1. Click on XBaya --> Register Application. Then fill in the following values as shown in the diagram.



2. Click the New Deployment button. The following dialog box will appear. Select Ranger as the host from the drop-down menu. Then fill in the executable that you are going to invoke (in this case /bin/echo) and set the proper scratch directory. The scratch directory can be identified by running the following command on Ranger once you are logged in through your shell.

echo $SCRATCH


3. Then click the "HPC Configuration" button. Select the job type as serial. Specify the project account assigned to you. Finally, specify the queue as development, as your job will be submitted to this queue. You can configure the other fields as well; if you don't set them, default values will be used. Then click the "Update" button.


4. Click "Add" button.


5. Click "Register" button.

6. If the registration was successful, the following message will appear.


7. Click XBaya Menu and select "New Workflow".


8. Give a name for your workflow and hit "Ok".


9. Your application will be listed under "Application Services" in the left menu.


10. Drag and drop it onto the workflow window.


11. Select the "input" and "output" components from the component list. Drag them to the workflow window. Then connect them using the mouse pointer as shown in the diagram below.


12. Then hit the red "play" button in the top left corner. The following dialog will appear. Give an input for your workflow and an experiment name for the workflow run. Then hit "Run". Your job will now be launched on Ranger.



[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide
[2] - https://software.xsede.org/security/xsede-certs.tar.gz
[3] - http://airavata.apache.org/documentation/system/airavata-in-10-minutes.html

How to start Airavata with a MySQL backend

Airavata runs with Apache Derby as the backend database by default. You can use a MySQL database instead of the default Derby database. This post describes how to do that.

1. Create a new user called airavata with password airavata in the running MySQL instance.

mysql> CREATE USER 'airavata'@'localhost' IDENTIFIED BY 'airavata';

2. Create a MySQL database on your server with a name of your choice for the user airavata (assume the name is persistent_data).

mysql> CREATE DATABASE persistent_data;

GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
ON persistent_data.*
TO 'airavata'@'localhost'
IDENTIFIED BY 'airavata';

3. Copy the MySQL JDBC driver jar (e.g. mysql-connector-java-5.1.6.jar) to AIRAVATA_HOME/standalone-server/lib.

4. Edit the JDBC connection properties in the repository.properties file that resides in AIRAVATA_HOME/standalone-server/conf. You can do this by uncommenting the parameters under the "#configuration for mysql" section and commenting out the default Derby configuration. (If you used the user, password, and database names above, you should not have to change the values themselves.)

registry.jdbc.url=jdbc:mysql://localhost:3306/persistent_data
registry.jdbc.driver=com.mysql.jdbc.Driver

5. Go to AIRAVATA_HOME/bin and run the airavata-server.sh script. The Airavata server will now start with the MySQL database (in this case the persistent_data database).

6. When starting XBaya, it should also point to the same database that we specified when starting the Airavata server. To do that, copy the repository.properties file that you edited in step 4 to the AIRAVATA_HOME/bin folder. Now you are ready to start XBaya: go to the AIRAVATA_HOME/bin folder and run the xbaya-gui.sh script. XBaya will start with the same database that was used to start the Airavata server.

Monday, September 10, 2012

Configure Apache Rave for SSL

I had to configure Apache Rave for SSL for the OA4MP integration work that I am currently doing with Rave. I had to make some configuration changes to get SSL working with Rave. Following are the instructions on how to do it.

Enabling SSL in Tomcat

The following instructions demonstrate how to get Tomcat 6 running over SSL using a self-signed certificate.
  • Find the reverse DNS (of the IP address) of the machine on which you are going to install.
$ host your-ip-address
  • You will get the reverse DNS of the IP address you gave.
 xxx-yy-zzz-hhh.dhcp-bl.xxx.edu
  • Generate a self-signed certificate that you'll use with Tomcat.
keytool -genkey -alias tomcat -keyalg RSA -validity 365 -storepass changeit -keystore $JAVA_HOME/jre/lib/security/cacerts

What is your first and last name?
  [Unknown]:  xxx-yy-zzz-hhh.dhcp-bl.xxx.edu
What is the name of your organizational unit?
  [Unknown]:  SGG
What is the name of your organization?
  [Unknown]:  IU
What is the name of your City or Locality?
  [Unknown]:  Bloomington
What is the name of your State or Province?
  [Unknown]:  IN
What is the two-letter country code for this unit?
  [Unknown]:  US
Is CN=xxx-yy-zzz-hhh.dhcp-bl.xxx.edu, OU=SGG, O=IU, L=Bloomington, ST=IN, C=US correct?
  [no]:  yes

Enter key password for <tomcat>
        (RETURN if same as keystore password):
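To double-check that the certificate really ended up in the cacerts file under the alias tomcat, a short Java snippet like the one below can be used. This is just a verification sketch; it assumes the default changeit store password used above and that you run it with the same JVM whose cacerts file keytool modified.

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;

public class CacertsAliasCheck {
    public static void main(String[] args) throws Exception {
        // On most setups java.home points at the JRE, i.e. the cacerts file keytool wrote to
        String cacerts = System.getProperty("java.home") + "/lib/security/cacerts";
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        FileInputStream in = new FileInputStream(cacerts);
        ks.load(in, "changeit".toCharArray());
        in.close();
        Certificate cert = ks.getCertificate("tomcat");
        System.out.println(cert == null ? "tomcat alias NOT found" : "tomcat alias found: " + cert.getType());
    }
}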
  • Edit Tomcat's server.xml to enable an SSL listener on port 443 using our alternate cacerts file. By default, Tomcat looks for a certificate with the alias "tomcat", which is what we used when creating our self-signed certificate. (Uncomment the HTTPS connector and configure it to use our custom cacerts file.)

<Connector port="443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="$JAVA_HOME/jre/lib/security/cacerts" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />

Configure Apache Rave and Shindig to run over SSL.

1. Configure properties files.
  • Edit the portal.properties file to configure Apache Rave to use SSL. (Update the following values at the top of the portal.properties config file.)
portal.opensocial_engine.protocol=https
portal.opensocial_engine.root=xxx-yy-zzz-hhh.dhcp-bl.xxx.edu
portal.opensocial_engine.gadget_path=/gadgets
Edit the rave.shindig.properties and container.js files to configure Shindig to use SSL.
  • The change to container.js is a search-and-replace of http:// with https://.
  • Update the following values at the top of the rave.shindig.properties config file.
shindig.host= xxx-yy-zzz-hhh.dhcp-bl.xxx.edu
shindig.port=
shindig.contextroot=

2. Update the rave-portal pom.
  • Add the following configuration to the Cargo plugin. It uses the Tomcat server.xml file (configured in the first section), given in the configuration, to start up a Tomcat instance.
<configfiles>
    <configfile>
        <file>${project.basedir}/../rave-portal-resources/src/main/dist/conf/tomcat-users.xml</file>
        <todir>conf/</todir>
        <tofile>tomcat-users.xml</tofile>
    </configfile>
    <configfile>
        <file>/home/heshan/Dev/airavata-rave-integration/oauth/rave-0.15-oa4mp-branch/config/server.xml</file>
        <todir>conf/</todir>
        <tofile>server.xml</tofile>
    </configfile>
</configfiles>
  • Build the Rave project.
mvn clean install
  • Move to the rave-portal module and start Rave using the Cargo plugin.
cd rave-portal
mvn cargo:start
  • Log into the portal using the login page. 
https://156-56-179-232.dhcp-bl.indiana.edu/portal/login

Friday, August 3, 2012

Apache Airavata 0.4-INCUBATING Released


The Apache Airavata (Incubating) team is pleased to announce the immediate availability of the Airavata 0.4-INCUBATING release.

The release can be obtained from the Apache Airavata download page - http://incubator.apache.org/airavata/about/downloads.html


Apache Airavata is a software toolkit currently used to build science gateways, but it has a much wider potential use. It provides features to compose, manage, execute, and monitor small to large scale applications and workflows on computational resources ranging from local clusters to national grids and computing clouds. Gadget interfaces to Airavata back-end services can be deployed in OpenSocial containers such as Apache Rave, and users can modify them to suit their needs. Airavata builds on general concepts of service-oriented computing, distributed messaging, and workflow composition and orchestration.

For general information on Apache Airavata, please visit the project website:

Disclaimer:
 Apache Airavata is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects.  While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

Friday, July 6, 2012

How to Submit Patches to Apache Airavata


This post describes how an Airavata user can contribute to the Airavata project by submitting patches. Users can follow the steps given below.
  • Identify an issue that you want to fix or improve
  • Search JIRA and the mailing list to see if it’s already been discussed
  • If it’s a bug or a feature request, open a JIRA issue
  • Create a sample that you can use for prototyping the feature or demonstrating the bug. If creating a sample is time consuming, write steps to reproduce the issue.
  • Attach this sample to the JIRA issue if it’s representing a bug report.
  • Set up an svn client on your system.
  • Checkout the source code.
  • Make your changes
  • Create the patch:
    • svn add any_files_you_added
    • svn diff > /tmp/fix-AIRAVATA-NNNN.patch
  • Attach that file (/tmp/fix-AIRAVATA-NNNN.patch) to the JIRA

Thursday, July 5, 2012

Deploying Airavata Server on Tomcat


A shell script named setup_tomcat.sh is shipped with Airavata that will assist you in deploying the Airavata server on Tomcat. The following steps describe how you can do it.

1) Update the tomcat.hostname and tomcat.port properties of the airavata-tomcat.properties file. You can keep the defaults if you don't want to change ports; in that case you don't have to edit the file at all. The file can be found at AIRAVATA_HOME/tools/airavata-tomcat.properties.

2) Download the following to your local file system.
a) apache-tomcat-7.0.28.zip
b) apache-airavata-0.4-incubating-SNAPSHOT-bin.zip
c) axis2-1.5.1-war.zip (Unzip it. When running the script, point to the extracted axis2.war.)
d) jackrabbit-webapp-2.4.0.war

3) Run the script (setup_tomcat.sh) by providing the full file paths of the files you downloaded. This script can be found in the AIRAVATA_HOME/tools/ directory.
./setup_tomcat.sh --tomcat=/home/heshan/Dev/setup/gw8/apache-tomcat-7.0.28.zip --airavata=/home/heshan/Dev/setup/gw8/apache-airavata-0.4-incubating-SNAPSHOT-bin.zip --axis2=/home/heshan/Dev/setup/gw8/axis2.war --jackrabbit=/home/heshan/Dev/setup/gw8/jackrabbit-webapp-2.4.0.war --properties=/home/heshan/Dev/setup/gw8/airavata-tomcat.properties

4) Start the Tomcat server.
e.g.: ./catalina.sh start

5) Before using Airavata, go to http://localhost:8090/jackrabbit-webapp-2.4.0 and create a default content repository.

6) Restart the Tomcat server.

Wednesday, July 4, 2012

Registering Application Descriptors using Airavata Client API

The following post demonstrates how to programmatically register (1) Host, (2) Application, and (3) Service descriptors using the Apache Airavata Client API.

import org.apache.airavata.common.registry.api.exception.RegistryException;
import org.apache.airavata.commons.gfac.type.ApplicationDeploymentDescription;
import org.apache.airavata.commons.gfac.type.HostDescription;
import org.apache.airavata.commons.gfac.type.ServiceDescription;
import org.apache.airavata.migrator.registry.MigrationUtil;
import org.apache.airavata.registry.api.AiravataRegistry;
import org.apache.airavata.schemas.gfac.*;

import java.net.MalformedURLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DescriptorRegistrationSample {

    public static void main(String[] args) {
        Map<String, String> config = new HashMap<String, String>();
        config.put(org.apache.airavata.client.airavata.AiravataClient.MSGBOX,"http://localhost:8090/axis2/services/MsgBoxService");
        config.put(org.apache.airavata.client.airavata.AiravataClient.BROKER, "http://localhost:8090/axis2/services/EventingService");
        config.put(org.apache.airavata.client.airavata.AiravataClient.WORKFLOWSERVICEURL, "http://localhost:8090/axis2/services/WorkflowInterpretor?wsdl");
        config.put(org.apache.airavata.client.airavata.AiravataClient.JCR, "http://localhost:8090/jackrabbit-webapp-2.4.0/rmi");
        config.put(org.apache.airavata.client.airavata.AiravataClient.JCR_USERNAME, "admin");
        config.put(org.apache.airavata.client.airavata.AiravataClient.JCR_PASSWORD, "admin");
        config.put(org.apache.airavata.client.airavata.AiravataClient.GFAC, "http://localhost:8090/axis2/services/GFacService");
        config.put(org.apache.airavata.client.airavata.AiravataClient.WITHLISTENER, "false");
        config.put(org.apache.airavata.client.airavata.AiravataClient.TRUSTED_CERT_LOCATION, "/Users/Downloads/certificates");

        org.apache.airavata.client.airavata.AiravataClient airavataClient = null;
        try {
            airavataClient = new org.apache.airavata.client.airavata.AiravataClient(config);
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }

        // Create Host Description
        HostDescription host = new HostDescription();
        host.getType().changeType(GlobusHostType.type);
        host.getType().setHostName("gram");
        host.getType().setHostAddress("gatekeeper2.ranger.tacc.teragrid.org");
        ((GlobusHostType) host.getType()).
                setGridFTPEndPointArray(new String[]{"gsiftp://gridftp.ranger.tacc.teragrid.org:2811/"});
        ((GlobusHostType) host.getType()).
                setGlobusGateKeeperEndPointArray(new String[]{"gatekeeper.ranger.tacc.teragrid.org:2119/jobmanager-sge"});


        // Create Application Description 
        ApplicationDeploymentDescription appDesc = new ApplicationDeploymentDescription(GramApplicationDeploymentType.type);
        GramApplicationDeploymentType app = (GramApplicationDeploymentType) appDesc.getType();
        app.setCpuCount(1);
        app.setNodeCount(1);
        ApplicationDeploymentDescriptionType.ApplicationName name = appDesc.getType().addNewApplicationName();
        name.setStringValue("EchoMPILocal");
        app.setExecutableLocation("/home/path_to_executable");
        app.setScratchWorkingDirectory("/home/path_to_temporary_directory");
        app.setCpuCount(16);
        app.setJobType(MigrationUtil.getJobTypeEnum("MPI"));
        //app.setMinMemory();
        ProjectAccountType projectAccountType = ((GramApplicationDeploymentType) appDesc.getType()).addNewProjectAccount();
        projectAccountType.setProjectAccountNumber("XXXXXXXX");

        // Create Service Description
        ServiceDescription serv = new ServiceDescription();
        serv.getType().setName("MockPwscfMPIService");

        InputParameterType input = InputParameterType.Factory.newInstance();
        input.setParameterName("echo_input_name");
        ParameterType parameterType = input.addNewParameterType();
        parameterType.setType(DataType.Enum.forString("String"));
        parameterType.setName("String");
        List<InputParameterType> inputList = new ArrayList<InputParameterType>();
        inputList.add(input);
        InputParameterType[] inputParamList = inputList.toArray(new InputParameterType[inputList
                .size()]);

        OutputParameterType output = OutputParameterType.Factory.newInstance();
        output.setParameterName("echo_mpi_output");
        ParameterType parameterType1 = output.addNewParameterType();
        parameterType1.setType(DataType.Enum.forString("String"));
        parameterType1.setName("String");
        List<OutputParameterType> outputList = new ArrayList<OutputParameterType>();
        outputList.add(output);
        OutputParameterType[] outputParamList = outputList
                .toArray(new OutputParameterType[outputList.size()]);
        serv.getType().setInputParametersArray(inputParamList);
        serv.getType().setOutputParametersArray(outputParamList);

        // Save to Registry
        if (airavataClient!=null) {
            System.out.println("Saving to Registry");
            AiravataRegistry jcrRegistry = airavataClient.getRegistry();
            try {
                jcrRegistry.saveHostDescription(host);
                jcrRegistry.saveServiceDescription(serv);
                jcrRegistry.saveDeploymentDescription(serv.getType().getName(), host.getType().getHostName(), appDesc);

                jcrRegistry.deployServiceOnHost(serv.getType().getName(), host.getType().getHostName());
            } catch (RegistryException e) {
                e.printStackTrace();
            }
        }

        System.out.println("DONE");
        
    }

}

Tuesday, July 3, 2012

Airavata Programming API


Apache Airavata's Programming API is the API exposed to Gateway Developers. Gateway Developers can use this API to execute and monitor workflows. The API user should keep in mind that a workflow (.xwf) cannot be composed using this API; to do that, use the XBaya user interface. Therefore, other than the creation of the workflow, the Client API supports all other workflow-related operations.


The main motivation behind having a Client API is to expose an API that lets the user access the persistent information stored in the Registry. The information persisted in the Registry can be:
  • Descriptors
  • Workflow information
  • Workflow provenance information
  • Airavata configuration

Following are the high-level use cases which use the Airavata Client API.



Client API Usecases



  1. Registry Operations 
    • Retrieve registry information
    • Access registry information
    • Update registry information
    • Delete registry information
    • Search registry information
  2. Execute workflow
    • Run workflow
    • Set inputs
    • Set workflow node IDs 
  3. Monitoring
  4. Provenance
  5. User Management (This is not yet implemented. It is currently on our roadmap and is added as a placeholder.)
    • User roles
    • Administration



Client API Components



The Client API consists of the following components (a short usage sketch follows this list).
  1. Airavata API
    • It is an Aggregator API which contains all the base methods for Airavata API.
  2. Airavata Manager
    • This exposes config related information on Airavata. This currently contains Service URLs only.
  3. Application Manager
    • This will handle operations related to descriptors, namely:
      1. Host description
      2. Service description
      3. Application description
  4. Execution Manager
    • This can be used to run and monitor workflows.
  5. Provenance Manager
    • This provides an API to manage provenance-related information, i.e. it keeps track of inputs, outputs, etc. related to a workflow.
  6. User Manager
    • User management related API is exposed through this. Currently, Airavata does not support user management, but it is on the Airavata roadmap.
  7. Workflow Manager
    • Every operation related to workflows is exposed through this, i.e.:
      1. saving workflow
      2. deleting workflow
      3. retrieving workflow
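To make the above a little more concrete, here is a rough usage sketch based on the AiravataClient used in the descriptor-registration post elsewhere on this blog. It only constructs the client and obtains the registry handle; the service URLs are examples for a local deployment and will differ in your setup, so treat it as a sketch rather than a complete program.

import java.net.MalformedURLException;
import java.util.HashMap;
import java.util.Map;

import org.apache.airavata.client.airavata.AiravataClient;
import org.apache.airavata.registry.api.AiravataRegistry;

public class ClientApiSketch {
    public static void main(String[] args) throws MalformedURLException {
        Map<String, String> config = new HashMap<String, String>();
        // Endpoints of a locally deployed Airavata (adjust to your deployment)
        config.put(AiravataClient.MSGBOX, "http://localhost:8090/axis2/services/MsgBoxService");
        config.put(AiravataClient.BROKER, "http://localhost:8090/axis2/services/EventingService");
        config.put(AiravataClient.WORKFLOWSERVICEURL, "http://localhost:8090/axis2/services/WorkflowInterpretor?wsdl");
        config.put(AiravataClient.JCR, "http://localhost:8090/jackrabbit-webapp-2.4.0/rmi");
        config.put(AiravataClient.JCR_USERNAME, "admin");
        config.put(AiravataClient.JCR_PASSWORD, "admin");
        config.put(AiravataClient.GFAC, "http://localhost:8090/axis2/services/GFacService");
        config.put(AiravataClient.WITHLISTENER, "false");

        AiravataClient client = new AiravataClient(config);

        // The registry handle is the entry point to descriptors, workflows and provenance
        AiravataRegistry registry = client.getRegistry();
        System.out.println("Registry handle obtained: " + (registry != null));
    }
}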

Wednesday, June 27, 2012

Apache Airavata Stakeholders

Airavata Users
Airavata is a framework which enables users to build science gateways. It is used to compose, manage, execute, and monitor distributed applications and workflows on computational resources. These computational resources can range from local resources to computational grids and clouds. Therefore, various users with different backgrounds either contribute to Airavata or use it in their applications. From the Airavata standpoint, three main users can be identified.
  • Research Scientists (Gateway End Users)
  • Gateway Developers
  • Core Developers
Now let's focus on each user and how they fit into Airavata's big picture.

 

Gateway End Users

The Gateway End User is the one who has model code for some scientific application. Sometimes this end user is a Research Scientist. He/she writes scripts to wrap the applications up, and by executing those scripts runs scientific workflows on supercomputers. This can be called a scientific experiment. Now the scientist might need to call several of these applications together and compose a workflow. That's where the Gateway Developer comes into the picture.

 

Gateway Developers

The Research Scientist is the one who comes up with the requirement of bundling scientific applications together and composing them as a workflow.
The job of the Gateway Developer is to use Airavata to wrap the above-mentioned model code and scripts together. Then, scientific workflows are created out of these.
The above diagram depicts how the Gateway Developer fits into the picture.

 

Core Developers


The Core Developer is the one who develops and contributes to the Airavata framework codebase. The Gateway Developers use the software developed by the Core Developers to create science gateways.

Thursday, June 21, 2012

Apache Airavata 0.3-INCUBATING Released


The Apache Airavata (Incubating) team is pleased to announce the immediate availability of the Airavata 0.3-INCUBATING release.

The release can be obtained from the Apache Airavata download page - http://incubator.apache.org/airavata/about/downloads.html


Apache Airavata is a software toolkit currently used to build science gateways, but it has a much wider potential use. It provides features to compose, manage, execute, and monitor small to large scale applications and workflows on computational resources ranging from local clusters to national grids and computing clouds. Gadget interfaces to Airavata back-end services can be deployed in OpenSocial containers such as Apache Rave, and users can modify them to suit their needs. Airavata builds on general concepts of service-oriented computing, distributed messaging, and workflow composition and orchestration.

For general information on Apache Airavata, please visit the project website:

Wednesday, June 20, 2012

Programmatically execute an Echo job on Ranger using Apache Airavata


In an earlier post [2] we looked at how to execute an Echo job on Ranger [1] using the XBaya GUI. This post describes how to run the same scenario using a Java client. This Java client does not use the AiravataClient API; instead it uses XML Beans generated from the schema to describe and run the job programmatically. I will be writing a test client later which will use the AiravataClient API.

1. Configure the gram.properties file which will be used in the test case (let's assume it's named gram_ranger.properties).

# The myproxy server to retrieve the grid credentials
myproxy.server=myproxy.teragrid.org
# Example: XSEDE myproxy server
#myproxy.server=myproxy.teragrid.org
# The user name and password to fetch grid proxy
myproxy.username=username
myproxy.password=********
#Directory with Grid Certification Authority certificates and CRL's
# The certificates for XSEDE can be downloaded from http://software.xsede.org/security/xsede-certs.tar.gz
ca.certificates.directory=/home/heshan/Dev/setup/gram-provider/certificates
# On computational grids, an allocation is awarded with a charge number. On XSEDE, the numbers are typically of the format TG-DIS123456
allocation.charge.number=TG-STA110014S
# The scratch space with ample space to create temporary working directory on target compute cluster
scratch.working.directory=/scratch/01437/ogce/test
# Name, FQDN, and gram and gridftp end points of the remote compute cluster
host.commom.name=gram
host.fqdn.name=gatekeeper2.ranger.tacc.teragrid.org
gridftp.endpoint=gsiftp://gridftp.ranger.tacc.teragrid.org:2811/
gram.endpoints=gatekeeper.ranger.tacc.teragrid.org:2119/jobmanager-sge
defualt.queue=development

2. Using the above configured properties file (gram_ranger.properties), run the test case, which will execute the simple Echo job on Ranger.



import org.apache.airavata.commons.gfac.type.ActualParameter;
import org.apache.airavata.commons.gfac.type.ApplicationDeploymentDescription;
import org.apache.airavata.commons.gfac.type.HostDescription;
import org.apache.airavata.commons.gfac.type.ServiceDescription;
import org.apache.airavata.core.gfac.context.invocation.impl.DefaultExecutionContext;
import org.apache.airavata.core.gfac.context.invocation.impl.DefaultInvocationContext;
import org.apache.airavata.core.gfac.context.message.impl.ParameterContextImpl;
import org.apache.airavata.core.gfac.context.security.impl.GSISecurityContext;
import org.apache.airavata.core.gfac.notification.impl.LoggingNotification;
import org.apache.airavata.core.gfac.services.impl.PropertiesBasedServiceImpl;
import org.apache.airavata.registry.api.impl.AiravataJCRRegistry;
import org.apache.airavata.schemas.gfac.*;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

import java.net.URL;
import java.util.*;

import static org.junit.Assert.fail;

public class GramProviderTest {

    public static final String MYPROXY = "myproxy";
    public static final String GRAM_PROPERTIES = "gram_ranger.properties";
    private AiravataJCRRegistry jcrRegistry = null;

    @Before
    public void setUp() throws Exception {
        /*
           * Create database
           */
        Map<String, String> config = new HashMap<String, String>();
        config.put("org.apache.jackrabbit.repository.home", "target");

        jcrRegistry = new AiravataJCRRegistry(null,
                "org.apache.jackrabbit.core.RepositoryFactoryImpl", "admin",
                "admin", config);
    
        /*
           * Host
           */

        URL url = this.getClass().getClassLoader().getResource(GRAM_PROPERTIES);
        Properties properties = new Properties();
        properties.load(url.openStream());
        HostDescription host = new HostDescription();
        host.getType().changeType(GlobusHostType.type);
        host.getType().setHostName(properties.getProperty("host.commom.name"));
        host.getType().setHostAddress(properties.getProperty("host.fqdn.name"));
        ((GlobusHostType) host.getType()).setGridFTPEndPointArray(new String[]{properties.getProperty("gridftp.endpoint")});
        ((GlobusHostType) host.getType()).setGlobusGateKeeperEndPointArray(new String[]{properties.getProperty("gram.endpoints")});


        /*
        * App
        */
        ApplicationDeploymentDescription appDesc = new ApplicationDeploymentDescription(GramApplicationDeploymentType.type);
        GramApplicationDeploymentType app = (GramApplicationDeploymentType) appDesc.getType();
        app.setCpuCount(1);
        app.setNodeCount(1);
        ApplicationDeploymentDescriptionType.ApplicationName name = appDesc.getType().addNewApplicationName();
        name.setStringValue("EchoLocal");
        app.setExecutableLocation("/bin/echo");
        app.setScratchWorkingDirectory(properties.getProperty("scratch.working.directory"));
        app.setCpuCount(1);
        ProjectAccountType projectAccountType = ((GramApplicationDeploymentType) appDesc.getType()).addNewProjectAccount();
        projectAccountType.setProjectAccountNumber(properties.getProperty("allocation.charge.number"));
        QueueType queueType = app.addNewQueue();
        queueType.setQueueName(properties.getProperty("defualt.queue"));
        app.setMaxMemory(100);
        
        /*
           * Service
           */
        ServiceDescription serv = new ServiceDescription();
        serv.getType().setName("SimpleEcho");

        InputParameterType input = InputParameterType.Factory.newInstance();
        ParameterType parameterType = input.addNewParameterType();
        parameterType.setName("echo_input");
        List<InputParameterType> inputList = new ArrayList<InputParameterType>();
        inputList.add(input);
        InputParameterType[] inputParamList = inputList.toArray(new InputParameterType[inputList
                .size()]);

        OutputParameterType output = OutputParameterType.Factory.newInstance();
        ParameterType parameterType1 = output.addNewParameterType();
        parameterType1.setName("echo_output");
        List<OutputParameterType> outputList = new ArrayList<OutputParameterType>();
        outputList.add(output);
        OutputParameterType[] outputParamList = outputList
                .toArray(new OutputParameterType[outputList.size()]);
        serv.getType().setInputParametersArray(inputParamList);
        serv.getType().setOutputParametersArray(outputParamList);

        /*
           * Save to registry
           */
        jcrRegistry.saveHostDescription(host);
        jcrRegistry.saveDeploymentDescription(serv.getType().getName(), host.getType().getHostName(), appDesc);
        jcrRegistry.saveServiceDescription(serv);
        jcrRegistry.deployServiceOnHost(serv.getType().getName(), host.getType().getHostName());
    }

    @Test
    public void testExecute() {
        try {
            URL url = this.getClass().getClassLoader().getResource(GRAM_PROPERTIES);
            Properties properties = new Properties();
            properties.load(url.openStream());

            DefaultInvocationContext ct = new DefaultInvocationContext();
            DefaultExecutionContext ec = new DefaultExecutionContext();
            ec.addNotifiable(new LoggingNotification());
            ec.setRegistryService(jcrRegistry);
            ct.setExecutionContext(ec);


            GSISecurityContext gsiSecurityContext = new GSISecurityContext();
            gsiSecurityContext.setMyproxyServer(properties.getProperty("myproxy.server"));
            gsiSecurityContext.setMyproxyUserName(properties.getProperty("myproxy.username"));
            gsiSecurityContext.setMyproxyPasswd(properties.getProperty("myproxy.password"));
            gsiSecurityContext.setMyproxyLifetime(14400);
            gsiSecurityContext.setTrustedCertLoc(properties.getProperty("ca.certificates.directory"));

            ct.addSecurityContext(MYPROXY, gsiSecurityContext);

            ct.setServiceName("SimpleEcho");

            /*
            * Input
            */
            ParameterContextImpl input = new ParameterContextImpl();
            ActualParameter echo_input = new ActualParameter();
            ((StringParameterType) echo_input.getType()).setValue("echo_output=hello");
            input.add("echo_input", echo_input);

            /*
            * Output
            */
            ParameterContextImpl output = new ParameterContextImpl();
            ActualParameter echo_output = new ActualParameter();
            output.add("echo_output", echo_output);

            // parameter
            ct.setInput(input);
            ct.setOutput(output);

            PropertiesBasedServiceImpl service = new PropertiesBasedServiceImpl();
            service.init();
            service.execute(ct);

            Assert.assertNotNull(ct.getOutput());
            Assert.assertNotNull(ct.getOutput().getValue("echo_output"));
            Assert.assertEquals("hello", ((StringParameterType) ((ActualParameter) ct.getOutput().getValue("echo_output")).getType()).getValue());


        } catch (Exception e) {
            e.printStackTrace();
            fail("ERROR");
        }
    }
}
[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide 
[2] - http://heshans.blogspot.com/2012/06/execute-echo-job-on-ranger-using-apache.html

Execute an Echo job on Ranger using Apache Airavata

In this post we will look at how to execute an Echo application on Ranger [1] using Apache Airavata.

1. Before starting Airavata, configure the repository.properties file by modifying the default fake values of the following properties.

trusted.cert.location=path to certificates for ranger
myproxy.server=myproxy.teragrid.org
myproxy.user=username
myproxy.pass=password
myproxy.life=3600

Configure your certificate location and MyProxy credentials, then start the Jackrabbit instance and the Airavata server with all the services.

2. To create a workflow to run on Ranger you need to create a Host Description, an Application Description, and a Service Description. If you don't know how to create them, refer to the Airavata-in-10-minutes article [2] to understand how to create those documents. Once you become familiar with the XBaya UI, use the following values for the fields given below and create the documents using the XBaya GUI.

Host Description
  • Host Address - gatekeeper.ranger.tacc.teragrid.org

Click on the check box "Define this host as Globus host"; the following two entries will then be enabled for filling in.

  • Globus Gate Keeper Endpoint - gatekeeper.ranger.tacc.teragrid.org:2119/jobmanager-sge
  • Grid FTP Endpoint - gsiftp://gridftp.ranger.tacc.teragrid.org:2811/
Service Description

Fill in the values similarly to the service description section of the 10-minutes article [2].

Application Description
  • Executable path - /bin/echo
  • Temporary Directory - /tmp

Select the service descriptor and host descriptor created above. When you select the host descriptor, you will see a new button, Gram Configuration. Click on it and fill in the following values.
  • Job Type - Single
  • Project Account Number - You will see this when you log in to Ranger; put your project account number in this field
  • Project Description - Description of the project - not mandatory
  • Queue Type - development

Click on the Update button and save the Application Description.

Now that you have successfully created the descriptors, create a workflow as you did in the 10-minutes article [2] and try to run it.

[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide
[2] - http://incubator.apache.org/airavata/documentation/system/airavata-in-10-minutes.html

Thursday, June 14, 2012

Apache Airavata ForEach construct

This post will introduce the ForEach construct and its uses. Then we'll look at how to run the ForEach samples shipped with Apache Airavata [1].

Introduction
Airavata supports parametric sweeps through a construct called ForEach. Given an array of inputs of size n and an application service, it is possible to run the application service n times, once for each of the values in the input array. There are two modes of ForEach (illustrated by the sketch below):
  1. Cartesian (cross) product of inputs: input arrays of sizes n1, n2, ..., nk will yield:
    • n1 ∗ n2 ∗ ... ∗ nk invocations.
  2. Dot product of inputs: input arrays of equal size n will yield n invocations.
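The invocation counts for the two modes can be illustrated with a tiny piece of plain Java. This is not Airavata code, just a stand-alone sketch of how the input arrays are combined:

public class ForEachModesSketch {
    public static void main(String[] args) {
        int[] a = {1, 2};   // first input array
        int[] b = {3, 4};   // second input array

        // Cross (Cartesian) product: every element of a is paired with every element of b
        System.out.println("Cross product invocations:");
        for (int x : a) {
            for (int y : b) {
                System.out.println("  invoke(" + x + ", " + y + ")"); // 2 * 2 = 4 invocations
            }
        }

        // Dot product: element i of a is paired with element i of b (arrays must be the same length)
        System.out.println("Dot product invocations:");
        for (int i = 0; i < a.length; i++) {
            System.out.println("  invoke(" + a[i] + ", " + b[i] + ")"); // 2 invocations
        }
    }
}

With the arrays {1,2} and {3,4} used later in this post, the cross product produces 4 invocations and the dot product produces 2, matching the results shown in Figures 3 and 4.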
Executing a simple workflow which uses ForEach
Apache Airavata ships with a workflow named SimpleForEach.xwf. As the name suggests, it is a simple workflow which demonstrates the use of the ForEach construct. Following are the steps to run the sample workflow.

Instructions to run the sample
  1. Download the latest Airavata release pack from the downloads link and extract it; for the rest of this post, let's assume the path to the extracted directory is AIRAVATA_HOME.

    NOTE: The above-mentioned workflow samples are committed to the trunk. Therefore, you are better off using a trunk build.

  2. Now run the following scripts in the given order below to start the components of Airavata.

    AIRAVATA_HOME/bin/jackrabbit-server.sh - This will start the Jackrabbit repository on port 8081.

    AIRAVATA_HOME/bin/airavata-server.sh - This will start SimpleAxis2Server on port 8080.

    AIRAVATA_HOME/bin/xbaya-gui.sh - This will start the XBaya GUI application.
  3. Click the XBaya tab and open an Airavata workflow configuration (.xwf) from the file system (sample workflows shipped with Airavata can be found in AIRAVATA_HOME/samples/workflows), e.g. assume that you selected the SimpleForEach workflow.
    • Now click on the run button (the red "play" button).
    • The workflow will then get executed.
    • Finally, the result of the workflow will be displayed.
The workflow which we ran earlier looks like the following (Figure 1).

Figure 1 : SimpleForEach sample
When running the workflow, if you tick the "Execute in cross product" option in the "Execute Workflow" dialog box, the workflow will run in cross-product mode. Otherwise, by default a given ForEach will run in dot-product mode. The "Execute Workflow" dialog box is shown in Figure 2.

Figure 2 : Execute Workflow Dialog Box
Results
As you might have noticed in Figure 2, we have given two arrays (containing 1,2 and 3,4) as input to the ForEach construct.

If we run the dot product, there will be 2 resulting outputs, as can be seen in Figure 3.

Figure 3: Dot product result

If we run the cross product, there will be 4 resulting outputs, as can be seen in Figure 4.

Figure 4: Cross product result
Apache Airavata contains another sample named ComplexForEach.xwf, which demonstrates the use of multiple ForEach constructs inside one workflow. Have fun running the ComplexForEach sample!

Figure 5: ComplexForEach sample



REFERENCES
[1] - airavata.org

Running Airavata sample workflows


The purpose of this post is to give an understanding of how to run the sample workflows shipped with Airavata. If you are a new user and would like to acquire a basic understanding of how to use Airavata, please refer to the 5-minute tutorial and then the 10-minute tutorial. If you are familiar with Airavata and would like to run a sample workflow, this is the right place for you.


Introduction


This post will explain how to run a workflow using an existing Airavata workflow configuration. Airavata currently ships sample workflow configurations with its distribution. The samples included are:


  1. SimpleMath workflow
  2. ComplexMath workflow
  3. LevenshteinDistance workflow

Note: Currently Airavata works on Linux distributions and Mac; not all Apache Airavata components are supported on Windows.



Running a sample


  • Download the latest Airavata release pack from the downloads link and extract it; for the rest of this post, let's assume the path to the extracted directory is AIRAVATA_HOME.
  • Now run the following scripts in the given order below to start the components of Airavata.

AIRAVATA_HOME/bin/jackrabbit-server.sh - This will start the Jackrabbit repository on port 8081.

AIRAVATA_HOME/bin/airavata-server.sh - This will start SimpleAxis2Server on port 8080.

AIRAVATA_HOME/bin/xbaya-gui.sh - This will start the XBaya GUI application.
  • Click the XBaya tab and open an Airavata workflow configuration (.xwf) from the file system (sample workflows shipped with Airavata can be found in AIRAVATA_HOME/samples/workflows), e.g. assume that you selected the SimpleMath workflow.
    • Now click on the run button (the red "play" button).
    • The workflow will then get executed.
    • Finally, the result of the workflow will be displayed.
    • Similarly, other workflows can be executed.

Workflow Samples



Basic samples


  1. SimpleMath workflow
    • This workflow hands the inputs over to 4 nodes. The results are then handed over to another 2 nodes, which in turn hand their results to a final node. The last node outputs the result of the operation. All the nodes perform addition operations (see the sketch after this list).
  2. ComplexMath workflow
    • This workflow hands the inputs over to 4 nodes which perform addition operations. The outputs (results) are then handed over to another 2 nodes which perform multiplication operations. The results of the multiplications are handed over to a final node, which performs an addition on its input data and outputs the resulting value.
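For readers who prefer code to prose, the SimpleMath dataflow described above can be sketched in plain Java. This is only an illustration of the add-add-add structure (assuming each adder node takes two inputs, so eight workflow inputs in total); it is not the actual workflow code.

public class SimpleMathSketch {
    // Every node in the SimpleMath workflow performs an addition
    static int add(int x, int y) {
        return x + y;
    }

    public static void main(String[] args) {
        int[] in = {1, 2, 3, 4, 5, 6, 7, 8}; // eight example workflow inputs

        // Level 1: four adder nodes
        int n1 = add(in[0], in[1]);
        int n2 = add(in[2], in[3]);
        int n3 = add(in[4], in[5]);
        int n4 = add(in[6], in[7]);

        // Level 2: two adder nodes
        int n5 = add(n1, n2);
        int n6 = add(n3, n4);

        // Level 3: the final adder node produces the workflow output
        System.out.println("SimpleMath result = " + add(n5, n6));
    }
}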

Advanced Samples


XBaya supports parametric sweeps, which can be used to tackle uncertainty in the inputs to a workflow. It supports Cartesian product and dot product of inputs.

  1. Levenshtein Distance workflow
    • This workflow uses Airavata's ForEach construct to calculate the Levenshtein distance of strings, using the cross product of the inputs (a plain-Java sketch of the distance computation follows).
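For reference, the quantity the workflow computes for each pair of strings is the standard Levenshtein (edit) distance. A plain-Java version of that computation, independent of Airavata and shown only to make the sample's output easier to interpret, looks like this:

public class LevenshteinSketch {
    // Classic dynamic-programming edit distance between two strings
    static int distance(String s, String t) {
        int[][] d = new int[s.length() + 1][t.length() + 1];
        for (int i = 0; i <= s.length(); i++) d[i][0] = i;
        for (int j = 0; j <= t.length(); j++) d[0][j] = j;
        for (int i = 1; i <= s.length(); i++) {
            for (int j = 1; j <= t.length(); j++) {
                int cost = (s.charAt(i - 1) == t.charAt(j - 1)) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1,   // deletion
                                            d[i][j - 1] + 1),  // insertion
                                   d[i - 1][j - 1] + cost);    // substitution
            }
        }
        return d[s.length()][t.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("kitten", "sitting")); // prints 3
    }
}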

Wednesday, June 13, 2012

Programmatically execute a simple MPI job on Ranger using Apache Airavata

In an earlier post [2] we looked at how to execute an MPI job on Ranger [1] using the XBaya GUI. This post describes how to run the same scenario using a Java client. This Java client does not use the AiravataClient API; instead it uses XML Beans generated from the schema to describe and run the MPI job programmatically. I will be writing a test client later which will use the AiravataClient API.

1. Configure the gram.properties file which will be used in the test case (let's assume it's named gram_ranger.properties).
# The myproxy server to retrieve the grid credentials
myproxy.server=myproxy.teragrid.org
# Example: XSEDE myproxy server
#myproxy.server=myproxy.teragrid.org

# The user name and password to fetch grid proxy
myproxy.username=username
myproxy.password=********

#Directory with Grid Certification Authority certificates and CRL's
# The certificates for XSEDE can be downloaded from http://software.xsede.org/security/xsede-certs.tar.gz
ca.certificates.directory=/home/heshan/Dev/setup/gram-provider/certificates

# On computational grids, an allocation is awarded with a charge number. On XSEDE, the numbers are typically of the format TG-DIS123456
allocation.charge.number=TG-STA110014S

# The scratch space with ample space to create temporary working directory on target compute cluster
scratch.working.directory=/scratch/01437/ogce/test

# Name, FQDN, and gram and gridftp end points of the remote compute cluster
host.commom.name=gram
host.fqdn.name=gatekeeper2.ranger.tacc.teragrid.org
gridftp.endpoint=gsiftp://gridftp.ranger.tacc.teragrid.org:2811/
gram.endpoints=gatekeeper.ranger.tacc.teragrid.org:2119/jobmanager-sge
defualt.queue=development
2. Using the above configured properties file (gram_ranger.properties), run the test case, which will execute the simple MPI job on Ranger.
import org.apache.airavata.commons.gfac.type.ActualParameter;
import org.apache.airavata.commons.gfac.type.ApplicationDeploymentDescription;
import org.apache.airavata.commons.gfac.type.HostDescription;
import org.apache.airavata.commons.gfac.type.ServiceDescription;
import org.apache.airavata.core.gfac.context.invocation.impl.DefaultExecutionContext;
import org.apache.airavata.core.gfac.context.invocation.impl.DefaultInvocationContext;
import org.apache.airavata.core.gfac.context.message.impl.ParameterContextImpl;
import org.apache.airavata.core.gfac.context.security.impl.GSISecurityContext;
import org.apache.airavata.core.gfac.notification.impl.LoggingNotification;
import org.apache.airavata.core.gfac.services.impl.PropertiesBasedServiceImpl;
import org.apache.airavata.migrator.registry.MigrationUtil;
import org.apache.airavata.registry.api.impl.AiravataJCRRegistry;
import org.apache.airavata.schemas.gfac.*;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

import java.net.URL;
import java.util.*;

import static org.junit.Assert.fail;

public class GramProviderMPIRangerTest {

    public static final String MYPROXY = "myproxy";
    public static final String GRAM_PROPERTIES = "gram_ranger.properties";
    private AiravataJCRRegistry jcrRegistry = null;

    @Before
    public void setUp() throws Exception {
        Map<String,String> config = new HashMap<String,String>();
            config.put("org.apache.jackrabbit.repository.home","target");

        jcrRegistry = new AiravataJCRRegistry(null,
                "org.apache.jackrabbit.core.RepositoryFactoryImpl", "admin",
                "admin", config);

        // Host
        URL url = this.getClass().getClassLoader().getResource(GRAM_PROPERTIES);
        Properties properties = new Properties();
        properties.load(url.openStream());
        HostDescription host = new HostDescription();
        host.getType().changeType(GlobusHostType.type);
        host.getType().setHostName(properties.getProperty("host.commom.name"));
        host.getType().setHostAddress(properties.getProperty("host.fqdn.name"));
        ((GlobusHostType) host.getType()).setGridFTPEndPointArray(new String[]{properties.getProperty("gridftp.endpoint")});
        ((GlobusHostType) host.getType()).setGlobusGateKeeperEndPointArray(new String[]{properties.getProperty("gram.endpoints")});

        /* Application */
        ApplicationDeploymentDescription appDesc = new ApplicationDeploymentDescription(GramApplicationDeploymentType.type);
        GramApplicationDeploymentType app = (GramApplicationDeploymentType) appDesc.getType();
        app.setCpuCount(1);
        app.setNodeCount(1);
        ApplicationDeploymentDescriptionType.ApplicationName name = appDesc.getType().addNewApplicationName();
        name.setStringValue("EchoMPILocal");
        app.setExecutableLocation("/share/home/01437/ogce/airavata-test/mpi-hellow-world");
        app.setScratchWorkingDirectory(properties.getProperty("scratch.working.directory"));
        app.setCpuCount(16);
        app.setJobType(MigrationUtil.getJobTypeEnum("MPI"));
        //app.setMinMemory();
        ProjectAccountType projectAccountType = ((GramApplicationDeploymentType) appDesc.getType()).addNewProjectAccount();
        projectAccountType.setProjectAccountNumber(properties.getProperty("allocation.charge.number"));

        /* Service */
        ServiceDescription serv = new ServiceDescription();
        serv.getType().setName("SimpleMPIEcho");

        InputParameterType input = InputParameterType.Factory.newInstance();
        ParameterType parameterType = input.addNewParameterType();
        parameterType.setName("echo_mpi_input");
        List<InputParameterType> inputList = new ArrayList<InputParameterType>();
        inputList.add(input);
        InputParameterType[] inputParamList = inputList.toArray(new InputParameterType[inputList
                .size()]);

        OutputParameterType output = OutputParameterType.Factory.newInstance();
        ParameterType parameterType1 = output.addNewParameterType();
        parameterType1.setName("echo_mpi_output");
        List<OutputParameterType> outputList = new ArrayList<OutputParameterType>();
        outputList.add(output);
        OutputParameterType[] outputParamList = outputList
                .toArray(new OutputParameterType[outputList.size()]);
        serv.getType().setInputParametersArray(inputParamList);
        serv.getType().setOutputParametersArray(outputParamList);

        /* Save to Registry */
        jcrRegistry.saveHostDescription(host);
        jcrRegistry.saveDeploymentDescription(serv.getType().getName(), host.getType().getHostName(), appDesc);
        jcrRegistry.saveServiceDescription(serv);
        jcrRegistry.deployServiceOnHost(serv.getType().getName(), host.getType().getHostName());
    }

    @Test
    public void testExecute() {
        try {
            URL url = this.getClass().getClassLoader().getResource(GRAM_PROPERTIES);
            Properties properties = new Properties();
            properties.load(url.openStream());

            DefaultInvocationContext ct = new DefaultInvocationContext();
            DefaultExecutionContext ec = new DefaultExecutionContext();
            ec.addNotifiable(new LoggingNotification());
            ec.setRegistryService(jcrRegistry);
            ct.setExecutionContext(ec);


            GSISecurityContext gsiSecurityContext = new GSISecurityContext();
            gsiSecurityContext.setMyproxyServer(properties.getProperty("myproxy.server"));
            gsiSecurityContext.setMyproxyUserName(properties.getProperty("myproxy.username"));
            gsiSecurityContext.setMyproxyPasswd(properties.getProperty("myproxy.password"));
            gsiSecurityContext.setMyproxyLifetime(14400);
            gsiSecurityContext.setTrustedCertLoc(properties.getProperty("ca.certificates.directory"));

            ct.addSecurityContext(MYPROXY, gsiSecurityContext);

            ct.setServiceName("SimpleMPIEcho");

            /* Input */
            ParameterContextImpl input = new ParameterContextImpl();
            ActualParameter echo_input = new ActualParameter();
            ((StringParameterType) echo_input.getType()).setValue("echo_mpi_output=hi");
            input.add("echo_mpi_input", echo_input);

            /* Output */
            ParameterContextImpl output = new ParameterContextImpl();
            ActualParameter echo_output = new ActualParameter();
            output.add("echo_mpi_output", echo_output);

            /* parameter */
            ct.setInput(input);
            ct.setOutput(output);

            PropertiesBasedServiceImpl service = new PropertiesBasedServiceImpl();
            service.init();
            service.execute(ct);

            System.out.println("output              : " + ct.getOutput().toString());
            System.out.println("output from service : " + ct.getOutput().getValue("echo_mpi_output"));

            Assert.assertNotNull(ct.getOutput());
            Assert.assertNotNull(ct.getOutput().getValue("echo_mpi_output"));

            System.out.println("output              : " + ((StringParameterType) ((ActualParameter) ct.getOutput().getValue("echo_mpi_output")).getType()).getValue());

        } catch (Exception e) {
            e.printStackTrace();
            fail("ERROR");
        }
    }
}
[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide 
[2] - http://heshans.blogspot.com/2012/06/execute-simple-mpi-job-on-ranger-using.html

Execute simple MPI job on Ranger using Apache Airavata

In an earlier post [3] we looked at how to install a simple hello-world MPI program on Ranger [1]. In this post we will look at how to execute the previously installed application on Ranger using Apache Airavata.

1. Before starting Airavata, configure the repository.properties file by modifying the default fake values of the following properties.

trusted.cert.location=path to certificates for ranger
myproxy.server=myproxy.teragrid.org
myproxy.user=username
myproxy.pass=password
myproxy.life=3600

Configure your certificate location and MyProxy credentials, then start the Jackrabbit instance and the Airavata server with all the services.

2. To create a workflow to run on Ranger you need to create a Host Description, an Application Description, and a Service Description. If you don't know how to create them, refer to the Airavata-in-10-minutes article [2] to understand how to create those documents. Once you become familiar with the XBaya UI, use the following values for the fields given below and create the documents using the XBaya GUI.


Host Description
Click on the check box "Define this host as Globus host"; the following two entries will then be enabled for filling in.

Service Description

Fill the values similarly like in 10 minutes article [2] service description saving part.



Application Description

  • Executable path - /share/home/01437/ogce/airavata-test/mpi-hellow-world
  • Temporary Directory - /scratch/01437/ogce/test
Select the service descriptor and host descriptor created above. When you select the host descriptor, you will see a new button, Gram Configuration. Click on it and fill in the following values.
  • Job Type - MPI
  • Project Account Number - You will see this when you log in to Ranger; put your project account number in this field
  • Project Description - Description of the project - not mandatory
  • Queue Type - development
Click on the Update button and save the Application Description.

Now that you have successfully created the descriptors, create an MPI workflow as you did in the 10-minutes article [2] and try to run it.



[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide
[2] - http://incubator.apache.org/airavata/documentation/system/airavata-in-10-minutes.html
[3] - http://heshans.blogspot.com/2012/06/running-simple-mpi-job-on-ranger.html