Wednesday, June 27, 2012

Apache Airavata Stakeholders

Airavata Users
Airavata is a framework that enables users to build Science Gateways. It is used to compose, manage, execute, and monitor distributed applications and workflows on computational resources, which can range from local resources to computational grids and clouds. Users with a variety of backgrounds therefore either contribute to Airavata or use it in their applications. From the Airavata standpoint, three main types of user can be identified.
  • Research Scientists (Gateway End Users)
  • Gateway Developers
  • Core Developers
Now let's focus on each user and how they fit into Airavata's big picture.

 

Gateway End Users

A Gateway End User is someone who has model code that performs some scientific computation. Often this End User is a Research Scientist. He/she writes scripts to wrap the applications up and, by executing those scripts, runs scientific workflows on supercomputers. This can be called a scientific experiment. The Scientist might then need to call several of these applications together and compose them into a workflow. That's where the Gateway Developer comes into the picture.

 

Gateway Developers

The Research Scientist is the one who comes up with the requirement of bundling scientific applications together and composing them into a workflow.
The job of the Gateway Developer is to use Airavata to wrap the above-mentioned model code and scripts together. Scientific workflows are then created out of these.
The above diagram depicts how the Gateway Developer fits into the picture.

 

Core Developers


A Core Developer is one who develops and contributes to the Airavata framework codebase. Gateway Developers use the software developed by the Core Developers to create science gateways.

Thursday, June 21, 2012

Apache Airavata 0.3-INCUBATING Released


The Apache Airavata (Incubating) team is pleased to announce the immediate
availability of the Airavata 0.3-INCUBATING release.

The release can be obtained from the Apache Airavata download page - http://incubator.apache.org/airavata/about/downloads.html


Apache Airavata is a software toolkit currently used to build science gateways, but it has much wider potential use. It provides features to compose, manage, execute, and monitor small- to large-scale applications and workflows on computational resources ranging from local clusters to national grids and computing clouds. Gadget interfaces to Airavata back-end services can be deployed in open social containers such as Apache Rave and modified to suit a gateway's needs. Airavata builds on general concepts of service-oriented computing, distributed messaging, and workflow composition and orchestration.

For general information on Apache Airavata, please visit the project website:

Wednesday, June 20, 2012

Programmatically execute an Echo job on Ranger using Apache Airavata


In an earlier post [2] we looked at how to execute an Echo job on Ranger [1] using the XBaya GUI. This post describes how to run the same scenario using a Java client. This Java client does not use the AiravataClient API; instead it uses XML Beans generated from the schema to describe and run the Echo job programmatically. I will be writing a test client later which uses the AiravataClient API.

1. Configure gram.properties file which will be used in the test case. (Let's assume it's named gram_ranger.properties)

# The myproxy server to retrieve the grid credentials
myproxy.server=myproxy.teragrid.org
# Example: XSEDE myproxy server
#myproxy.server=myproxy.teragrid.org
# The user name and password to fetch grid proxy
myproxy.username=username
myproxy.password=********
#Directory with Grid Certification Authority certificates and CRL's
# The certificates for XSEDE can be downloaded from http://software.xsede.org/security/xsede-certs.tar.gz
ca.certificates.directory=/home/heshan/Dev/setup/gram-provider/certificates
# On computational grids, an allocation is awarded with a charge number. On XSEDE, the numbers are typically of the format TG-DIS123456
allocation.charge.number=TG-STA110014S
# The scratch space with ample space to create temporary working directory on target compute cluster
scratch.working.directory=/scratch/01437/ogce/test
# Name, FQDN, and gram and gridftp end points of the remote compute cluster
host.commom.name=gram
host.fqdn.name=gatekeeper2.ranger.tacc.teragrid.org
gridftp.endpoint=gsiftp://gridftp.ranger.tacc.teragrid.org:2811/
gram.endpoints=gatekeeper.ranger.tacc.teragrid.org:2119/jobmanager-sge
defualt.queue=development

2. Using the properties file configured above (gram_ranger.properties), run the test case, which will execute the simple Echo job on Ranger.



import org.apache.airavata.commons.gfac.type.ActualParameter;
import org.apache.airavata.commons.gfac.type.ApplicationDeploymentDescription;
import org.apache.airavata.commons.gfac.type.HostDescription;
import org.apache.airavata.commons.gfac.type.ServiceDescription;
import org.apache.airavata.core.gfac.context.invocation.impl.DefaultExecutionContext;
import org.apache.airavata.core.gfac.context.invocation.impl.DefaultInvocationContext;
import org.apache.airavata.core.gfac.context.message.impl.ParameterContextImpl;
import org.apache.airavata.core.gfac.context.security.impl.GSISecurityContext;
import org.apache.airavata.core.gfac.notification.impl.LoggingNotification;
import org.apache.airavata.core.gfac.services.impl.PropertiesBasedServiceImpl;
import org.apache.airavata.registry.api.impl.AiravataJCRRegistry;
import org.apache.airavata.schemas.gfac.*;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

import java.net.URL;
import java.util.*;

import static org.junit.Assert.fail;

public class GramProviderTest {

    public static final String MYPROXY = "myproxy";
    public static final String GRAM_PROPERTIES = "gram_ranger.properties";
    private AiravataJCRRegistry jcrRegistry = null;

    @Before
    public void setUp() throws Exception {
        /*
         * Create database
         */
        Map<String, String> config = new HashMap<String, String>();
        config.put("org.apache.jackrabbit.repository.home", "target");

        jcrRegistry = new AiravataJCRRegistry(null,
                "org.apache.jackrabbit.core.RepositoryFactoryImpl", "admin",
                "admin", config);
    
        /*
         * Host
         */

        URL url = this.getClass().getClassLoader().getResource(GRAM_PROPERTIES);
        Properties properties = new Properties();
        properties.load(url.openStream());
        HostDescription host = new HostDescription();
        host.getType().changeType(GlobusHostType.type);
        host.getType().setHostName(properties.getProperty("host.commom.name"));
        host.getType().setHostAddress(properties.getProperty("host.fqdn.name"));
        ((GlobusHostType) host.getType()).setGridFTPEndPointArray(new String[]{properties.getProperty("gridftp.endpoint")});
        ((GlobusHostType) host.getType()).setGlobusGateKeeperEndPointArray(new String[]{properties.getProperty("gram.endpoints")});


        /*
         * App
         */
        ApplicationDeploymentDescription appDesc = new ApplicationDeploymentDescription(GramApplicationDeploymentType.type);
        GramApplicationDeploymentType app = (GramApplicationDeploymentType) appDesc.getType();
        app.setCpuCount(1);
        app.setNodeCount(1);
        ApplicationDeploymentDescriptionType.ApplicationName name = appDesc.getType().addNewApplicationName();
        name.setStringValue("EchoLocal");
        app.setExecutableLocation("/bin/echo");
        app.setScratchWorkingDirectory(properties.getProperty("scratch.working.directory"));
        app.setCpuCount(1);
        ProjectAccountType projectAccountType = ((GramApplicationDeploymentType) appDesc.getType()).addNewProjectAccount();
        projectAccountType.setProjectAccountNumber(properties.getProperty("allocation.charge.number"));
        QueueType queueType = app.addNewQueue();
        queueType.setQueueName(properties.getProperty("defualt.queue"));
        app.setMaxMemory(100);
        
        /*
         * Service
         */
        ServiceDescription serv = new ServiceDescription();
        serv.getType().setName("SimpleEcho");

        InputParameterType input = InputParameterType.Factory.newInstance();
        ParameterType parameterType = input.addNewParameterType();
        parameterType.setName("echo_input");
        List<InputParameterType> inputList = new ArrayList<InputParameterType>();
        inputList.add(input);
        InputParameterType[] inputParamList = inputList.toArray(new InputParameterType[inputList
                .size()]);

        OutputParameterType output = OutputParameterType.Factory.newInstance();
        ParameterType parameterType1 = output.addNewParameterType();
        parameterType1.setName("echo_output");
        List<OutputParameterType> outputList = new ArrayList<OutputParameterType>();
        outputList.add(output);
        OutputParameterType[] outputParamList = outputList
                .toArray(new OutputParameterType[outputList.size()]);
        serv.getType().setInputParametersArray(inputParamList);
        serv.getType().setOutputParametersArray(outputParamList);

        /*
         * Save to registry
         */
        jcrRegistry.saveHostDescription(host);
        jcrRegistry.saveDeploymentDescription(serv.getType().getName(), host.getType().getHostName(), appDesc);
        jcrRegistry.saveServiceDescription(serv);
        jcrRegistry.deployServiceOnHost(serv.getType().getName(), host.getType().getHostName());
    }

    @Test
    public void testExecute() {
        try {
            URL url = this.getClass().getClassLoader().getResource(GRAM_PROPERTIES);
            Properties properties = new Properties();
            properties.load(url.openStream());

            DefaultInvocationContext ct = new DefaultInvocationContext();
            DefaultExecutionContext ec = new DefaultExecutionContext();
            ec.addNotifiable(new LoggingNotification());
            ec.setRegistryService(jcrRegistry);
            ct.setExecutionContext(ec);


            GSISecurityContext gsiSecurityContext = new GSISecurityContext();
            gsiSecurityContext.setMyproxyServer(properties.getProperty("myproxy.server"));
            gsiSecurityContext.setMyproxyUserName(properties.getProperty("myproxy.username"));
            gsiSecurityContext.setMyproxyPasswd(properties.getProperty("myproxy.password"));
            gsiSecurityContext.setMyproxyLifetime(14400);
            gsiSecurityContext.setTrustedCertLoc(properties.getProperty("ca.certificates.directory"));

            ct.addSecurityContext(MYPROXY, gsiSecurityContext);

            ct.setServiceName("SimpleEcho");

            /*
             * Input
             */
            ParameterContextImpl input = new ParameterContextImpl();
            ActualParameter echo_input = new ActualParameter();
            ((StringParameterType) echo_input.getType()).setValue("echo_output=hello");
            input.add("echo_input", echo_input);

            /*
             * Output
             */
            ParameterContextImpl output = new ParameterContextImpl();
            ActualParameter echo_output = new ActualParameter();
            output.add("echo_output", echo_output);

            // parameter
            ct.setInput(input);
            ct.setOutput(output);

            PropertiesBasedServiceImpl service = new PropertiesBasedServiceImpl();
            service.init();
            service.execute(ct);

            Assert.assertNotNull(ct.getOutput());
            Assert.assertNotNull(ct.getOutput().getValue("echo_output"));
            Assert.assertEquals("hello", ((StringParameterType) ((ActualParameter) ct.getOutput().getValue("echo_output")).getType()).getValue());


        } catch (Exception e) {
            e.printStackTrace();
            fail("ERROR");
        }
    }
}
[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide 
[2] - http://heshans.blogspot.com/2012/06/execute-echo-job-on-ranger-using-apache.html

Execute an Echo job on Ranger using Apache Airavata

In this post we will look at how to execute an Echo application on Ranger [1] using Apache Airavata.

1. Before starting Airavata, configure the repository.properties file by replacing the default placeholder values for the following properties.

trusted.cert.location=path to certificates for ranger
myproxy.server=myproxy.teragrid.org
myproxy.user=username
myproxy.pass=password
myproxy.life=3600

Configure your certificate location and MyProxy credentials, then start a Jackrabbit instance and the Airavata server with all of its services.

2. To create a workflow to run on Ranger you need to create a Host Description, an Application Description, and a Service Description. If you don't know how to create them, refer to the Airavata in 10 minutes article [2] to understand how to create those documents. Once you are familiar with the XBaya UI, use the following values for the fields given below and create the documents using the XBaya GUI.

Host Description
  • Host Address - gatekeeper.ranger.tacc.teragrid.org

Click on the check box "Define this host as Globus host"; the following two fields will then become editable.

  • Globus Gate Keeper Endpoint - gatekeeper.ranger.tacc.teragrid.org:2119/jobmanager-sge
  • Grid FTP Endpoint - gsiftp://gridftp.ranger.tacc.teragrid.org:2811/
Service Description

Fill in the values as in the service description section of the 10 minutes article [2].

Application Description
  • Executable path - /bin/echo
  • Temporary Directory - /tmp

Select the service descriptor and host descriptor created above. When you select the host descriptor you will see a new button, Gram Configuration. Click on it and fill in the following values.
  • Job Type - Single
  • Project Account Number - You will see this when you log in to Ranger; put your project account number in this field
  • Project Description - A description of the project (not mandatory)
  • Queue Type - development

Click on the Update button and save the Application Description.

You have now successfully created the descriptors. Create a workflow as you did in the 10 minutes article [2] and try running it.

[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide
[2] - http://incubator.apache.org/airavata/documentation/system/airavata-in-10-minutes.html

Thursday, June 14, 2012

Apache Airavata ForEach construct

This post introduces the ForEach construct and its uses. Then we'll look at how to run the ForEach samples shipped with Apache Airavata [1].

Introduction
Airavata supports parametric sweeps through a construct called ForEach. Given an array of inputs of size n and an application service, it is possible to run the application service n times, once for each of the values in the input array. There are two modes of ForEach:
  1. Cartesian product of inputs: input arrays of sizes n1, n2, ..., nk will yield:
    • n1 ∗ n2 ∗ ... ∗ nk invocations.
  2. Dot product of inputs: equal-size input arrays of size n will yield n invocations.
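The two modes can be illustrated with a small, self-contained sketch in plain Java (this is not Airavata code; the class and method names are made up for illustration). It generates the list of invocations each mode would produce for two input arrays:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ForEachModes {

    // Cartesian (cross) product: every pairing of the two arrays is invoked.
    static List<String> crossProduct(List<String> a, List<String> b) {
        List<String> invocations = new ArrayList<String>();
        for (String x : a) {
            for (String y : b) {
                invocations.add(x + "," + y);
            }
        }
        return invocations; // size = a.size() * b.size()
    }

    // Dot product: inputs are paired index-by-index; arrays must be equal length.
    static List<String> dotProduct(List<String> a, List<String> b) {
        if (a.size() != b.size()) {
            throw new IllegalArgumentException("dot product requires equal-size arrays");
        }
        List<String> invocations = new ArrayList<String>();
        for (int i = 0; i < a.size(); i++) {
            invocations.add(a.get(i) + "," + b.get(i));
        }
        return invocations; // size = a.size()
    }

    public static void main(String[] args) {
        List<String> first = Arrays.asList("1", "2");
        List<String> second = Arrays.asList("3", "4");
        System.out.println(crossProduct(first, second)); // 2 * 2 = 4 invocations
        System.out.println(dotProduct(first, second));   // 2 invocations
    }
}
```

For the arrays (1,2) and (3,4) used later in this post, the cross product yields four invocations while the dot product yields two, which matches the result counts shown in the figures below.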
Executing a simple workflow which uses ForEach
Apache Airavata ships with a workflow named SimpleForEach.xwf. As the name suggests, it is a simple workflow that demonstrates the use of the ForEach construct. The following are the steps to run the sample workflow.

Instructions to run the sample
  1. Download the latest Airavata release pack from the downloads link and extract it. For the rest of this explanation, let's assume the path to the extracted directory is AIRAVATA_HOME.

    NOTE: The above-mentioned workflow samples are committed to the trunk. Therefore, you are better off using a trunk build.

  2. Now run the following scripts in the order given below to start the components of Airavata.

    AIRAVATA_HOME/bin/jackrabbit-server.sh - This will start the Jackrabbit repository on port 8081.

    AIRAVATA_HOME/bin/airavata-server.sh - This will start SimpleAxis2Server on port 8080.

    AIRAVATA_HOME/bin/xbaya-gui.sh - This will start the XBaya GUI application.
  3. Click the XBaya tab and open an Airavata workflow configuration (.xwf) from the file system (sample workflows shipped with Airavata can be found in AIRAVATA_HOME/samples/workflows). For example, assume that you selected the SimpleForEach workflow.
    • Now click on the run button (the red play icon).
    • The workflow will then get executed.
    • Finally, the result of the workflow will be displayed.
The workflow we just ran looks like the following (Figure 1).

Figure 1 : SimpleForEach sample
When running the workflow, if you tick the "Execute in cross product" option in the "Execute Workflow" dialog box, the workflow will run in cross-product mode. Otherwise, by default, a given ForEach will run in dot-product mode. The "Execute Workflow" dialog box is shown in Figure 2.

Figure 2 : Execute Workflow Dialog Box
Results
As you might have noticed in Figure 2, we have given two arrays (containing 1,2 and 3,4) as input to the ForEach construct.

If we run the dot product, there will be 2 resulting outputs. This can be seen in Figure 3.

Figure 3: Dot product result

If we run the cross product, there will be 4 resulting outputs. This can be seen in Figure 4.

Figure 4: Cross product result
Apache Airavata contains another sample named ComplexForEach.xwf, which demonstrates the use of multiple ForEach constructs inside one workflow. Have fun running the ComplexForEach sample!

Figure 5: ComplexForEach sample



REFERENCES
[1] - airavata.org

Running Airavata sample workflows


The purpose of this post is to explain how to run the sample workflows shipped with Airavata. If you are a new user and would like to acquire a basic understanding of how to use Airavata, please refer to the 5 minute tutorial and then the 10 minute tutorial. If you are already familiar with Airavata and would like to run a sample workflow, this is the right place for you.


Introduction


This post will explain how to run a workflow using an existing Airavata Workflow Configuration. Airavata currently ships sample workflow configurations with its distribution. The samples included are:


  1. SimpleMath workflow
  2. ComplexMath workflow
  3. LevenshteinDistance workflow

Note: Currently Airavata works on Linux distributions and Mac; not all Apache Airavata components work on Windows.



Running a sample


  • Download the latest Airavata release pack from the downloads link and extract it. For the rest of this explanation, let's assume the path to the extracted directory is AIRAVATA_HOME.
  • Now run the following scripts in the order given below to start the components of Airavata.

AIRAVATA_HOME/bin/jackrabbit-server.sh - This will start the Jackrabbit repository on port 8081.

AIRAVATA_HOME/bin/airavata-server.sh - This will start SimpleAxis2Server on port 8080.

AIRAVATA_HOME/bin/xbaya-gui.sh - This will start the XBaya GUI application.
  • Click the XBaya tab and open an Airavata workflow configuration (.xwf) from the file system (sample workflows shipped with Airavata can be found in AIRAVATA_HOME/samples/workflows). For example, assume that you selected the SimpleMath workflow.
    • Now click on the run button (the red play icon).
    • The workflow will then get executed.
    • Finally, the result of the workflow will be displayed.
    • Other workflows can be executed similarly.

Workflow Samples



Basic samples


  1. SimpleMath workflow
    • This workflow hands the inputs over to 4 nodes. Their results are handed over to another 2 nodes, which in turn hand their results to a final node. The last node outputs the result of the operation. All the nodes perform addition operations.
  2. ComplexMath workflow
    • This workflow hands the inputs over to 4 nodes performing addition operations. The outputs (results) are then handed over to another 2 nodes, which perform multiplication operations. The results of the multiplications are handed over to a final node, which performs an addition operation on its input data and outputs the resulting value.
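The node wiring of the SimpleMath workflow can be mirrored in a few lines of plain Java (this is only an illustration of the dataflow, not Airavata code; the class and method names are hypothetical):

```java
public class SimpleMathDataflow {

    // Each workflow node performs a single addition.
    static int add(int a, int b) {
        return a + b;
    }

    // Runs the three-layer addition workflow on eight inputs.
    static int run(int[] in) {
        // Layer 1: four independent addition nodes, two inputs each.
        int n1 = add(in[0], in[1]);
        int n2 = add(in[2], in[3]);
        int n3 = add(in[4], in[5]);
        int n4 = add(in[6], in[7]);
        // Layer 2: two nodes combine the first layer's results.
        // Layer 3: a final node emits the workflow output.
        return add(add(n1, n2), add(n3, n4));
    }

    public static void main(String[] args) {
        System.out.println(run(new int[]{1, 2, 3, 4, 5, 6, 7, 8})); // prints 36
    }
}
```

The ComplexMath workflow has the same shape, except that the middle layer performs multiplications instead of additions.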

Advanced Samples


XBaya supports parametric sweeps, which can be used to run a workflow over many combinations of inputs. It supports the Cartesian product and the dot product of inputs.

  1. Levenshtein Distance workflow
    • This workflow uses Airavata's ForEach construct to calculate the Levenshtein distance between strings. It uses the cross product to calculate the distances.
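For reference, the quantity this workflow computes per string pair is the classic dynamic-programming edit distance: the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into the other. A stand-alone sketch in plain Java (an illustrative implementation, not the workflow's actual node code):

```java
public class Levenshtein {

    // Classic DP edit distance: d[i][j] is the distance between the
    // first i characters of s and the first j characters of t.
    static int distance(String s, String t) {
        int[][] d = new int[s.length() + 1][t.length() + 1];
        for (int i = 0; i <= s.length(); i++) d[i][0] = i; // delete all of s
        for (int j = 0; j <= t.length(); j++) d[0][j] = j; // insert all of t
        for (int i = 1; i <= s.length(); i++) {
            for (int j = 1; j <= t.length(); j++) {
                int cost = s.charAt(i - 1) == t.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(
                        Math.min(d[i - 1][j] + 1,      // deletion
                                 d[i][j - 1] + 1),     // insertion
                        d[i - 1][j - 1] + cost);       // substitution / match
            }
        }
        return d[s.length()][t.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("kitten", "sitting")); // prints 3
    }
}
```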

Wednesday, June 13, 2012

Programmatically execute a simple MPI job on Ranger using Apache Airavata

In an earlier post [2] we looked at how to execute an MPI job on Ranger [1] using the XBaya GUI. This post describes how to run the same scenario using a Java client. This Java client does not use the AiravataClient API; instead it uses XML Beans generated from the schema to describe and run the MPI job programmatically. I will be writing a test client later which uses the AiravataClient API.

1. Configure gram.properties file which will be used in the test case. (Let's assume it's named gram_ranger.properties)
# The myproxy server to retrieve the grid credentials
myproxy.server=myproxy.teragrid.org
# Example: XSEDE myproxy server
#myproxy.server=myproxy.teragrid.org

# The user name and password to fetch grid proxy
myproxy.username=username
myproxy.password=********

#Directory with Grid Certification Authority certificates and CRL's
# The certificates for XSEDE can be downloaded from http://software.xsede.org/security/xsede-certs.tar.gz
ca.certificates.directory=/home/heshan/Dev/setup/gram-provider/certificates

# On computational grids, an allocation is awarded with a charge number. On XSEDE, the numbers are typically of the format TG-DIS123456
allocation.charge.number=TG-STA110014S

# The scratch space with ample space to create temporary working directory on target compute cluster
scratch.working.directory=/scratch/01437/ogce/test

# Name, FQDN, and gram and gridftp end points of the remote compute cluster
host.commom.name=gram
host.fqdn.name=gatekeeper2.ranger.tacc.teragrid.org
gridftp.endpoint=gsiftp://gridftp.ranger.tacc.teragrid.org:2811/
gram.endpoints=gatekeeper.ranger.tacc.teragrid.org:2119/jobmanager-sge
defualt.queue=development
2. Using the above configured properties file (gram_ranger.properties) run the test case which will execute the simple MPI job on Ranger.
import org.apache.airavata.commons.gfac.type.ActualParameter;
import org.apache.airavata.commons.gfac.type.ApplicationDeploymentDescription;
import org.apache.airavata.commons.gfac.type.HostDescription;
import org.apache.airavata.commons.gfac.type.ServiceDescription;
import org.apache.airavata.core.gfac.context.invocation.impl.DefaultExecutionContext;
import org.apache.airavata.core.gfac.context.invocation.impl.DefaultInvocationContext;
import org.apache.airavata.core.gfac.context.message.impl.ParameterContextImpl;
import org.apache.airavata.core.gfac.context.security.impl.GSISecurityContext;
import org.apache.airavata.core.gfac.notification.impl.LoggingNotification;
import org.apache.airavata.core.gfac.services.impl.PropertiesBasedServiceImpl;
import org.apache.airavata.migrator.registry.MigrationUtil;
import org.apache.airavata.registry.api.impl.AiravataJCRRegistry;
import org.apache.airavata.schemas.gfac.*;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

import java.net.URL;
import java.util.*;

import static org.junit.Assert.fail;

public class GramProviderMPIRangerTest {

    public static final String MYPROXY = "myproxy";
    public static final String GRAM_PROPERTIES = "gram_ranger.properties";
    private AiravataJCRRegistry jcrRegistry = null;

    @Before
    public void setUp() throws Exception {
        Map<String, String> config = new HashMap<String, String>();
        config.put("org.apache.jackrabbit.repository.home", "target");

        jcrRegistry = new AiravataJCRRegistry(null,
                "org.apache.jackrabbit.core.RepositoryFactoryImpl", "admin",
                "admin", config);

        // Host
        URL url = this.getClass().getClassLoader().getResource(GRAM_PROPERTIES);
        Properties properties = new Properties();
        properties.load(url.openStream());
        HostDescription host = new HostDescription();
        host.getType().changeType(GlobusHostType.type);
        host.getType().setHostName(properties.getProperty("host.commom.name"));
        host.getType().setHostAddress(properties.getProperty("host.fqdn.name"));
        ((GlobusHostType) host.getType()).setGridFTPEndPointArray(new String[]{properties.getProperty("gridftp.endpoint")});
        ((GlobusHostType) host.getType()).setGlobusGateKeeperEndPointArray(new String[]{properties.getProperty("gram.endpoints")});

        /* Application */
        ApplicationDeploymentDescription appDesc = new ApplicationDeploymentDescription(GramApplicationDeploymentType.type);
        GramApplicationDeploymentType app = (GramApplicationDeploymentType) appDesc.getType();
        app.setCpuCount(1);
        app.setNodeCount(1);
        ApplicationDeploymentDescriptionType.ApplicationName name = appDesc.getType().addNewApplicationName();
        name.setStringValue("EchoMPILocal");
        app.setExecutableLocation("/share/home/01437/ogce/airavata-test/mpi-hellow-world");
        app.setScratchWorkingDirectory(properties.getProperty("scratch.working.directory"));
        app.setCpuCount(16);
        app.setJobType(MigrationUtil.getJobTypeEnum("MPI"));
        //app.setMinMemory();
        ProjectAccountType projectAccountType = ((GramApplicationDeploymentType) appDesc.getType()).addNewProjectAccount();
        projectAccountType.setProjectAccountNumber(properties.getProperty("allocation.charge.number"));

        /* Service */
        ServiceDescription serv = new ServiceDescription();
        serv.getType().setName("SimpleMPIEcho");

        InputParameterType input = InputParameterType.Factory.newInstance();
        ParameterType parameterType = input.addNewParameterType();
        parameterType.setName("echo_mpi_input");
        List<InputParameterType> inputList = new ArrayList<InputParameterType>();
        inputList.add(input);
        InputParameterType[] inputParamList = inputList.toArray(new InputParameterType[inputList
                .size()]);

        OutputParameterType output = OutputParameterType.Factory.newInstance();
        ParameterType parameterType1 = output.addNewParameterType();
        parameterType1.setName("echo_mpi_output");
        List<OutputParameterType> outputList = new ArrayList<OutputParameterType>();
        outputList.add(output);
        OutputParameterType[] outputParamList = outputList
                .toArray(new OutputParameterType[outputList.size()]);
        serv.getType().setInputParametersArray(inputParamList);
        serv.getType().setOutputParametersArray(outputParamList);

        /* Save to Registry */
        jcrRegistry.saveHostDescription(host);
        jcrRegistry.saveDeploymentDescription(serv.getType().getName(), host.getType().getHostName(), appDesc);
        jcrRegistry.saveServiceDescription(serv);
        jcrRegistry.deployServiceOnHost(serv.getType().getName(), host.getType().getHostName());
    }

    @Test
    public void testExecute() {
        try {
            URL url = this.getClass().getClassLoader().getResource(GRAM_PROPERTIES);
            Properties properties = new Properties();
            properties.load(url.openStream());

            DefaultInvocationContext ct = new DefaultInvocationContext();
            DefaultExecutionContext ec = new DefaultExecutionContext();
            ec.addNotifiable(new LoggingNotification());
            ec.setRegistryService(jcrRegistry);
            ct.setExecutionContext(ec);


            GSISecurityContext gsiSecurityContext = new GSISecurityContext();
            gsiSecurityContext.setMyproxyServer(properties.getProperty("myproxy.server"));
            gsiSecurityContext.setMyproxyUserName(properties.getProperty("myproxy.username"));
            gsiSecurityContext.setMyproxyPasswd(properties.getProperty("myproxy.password"));
            gsiSecurityContext.setMyproxyLifetime(14400);
            gsiSecurityContext.setTrustedCertLoc(properties.getProperty("ca.certificates.directory"));

            ct.addSecurityContext(MYPROXY, gsiSecurityContext);

            ct.setServiceName("SimpleMPIEcho");

            /* Input */
            ParameterContextImpl input = new ParameterContextImpl();
            ActualParameter echo_input = new ActualParameter();
            ((StringParameterType) echo_input.getType()).setValue("echo_mpi_output=hi");
            input.add("echo_mpi_input", echo_input);

            /* Output */
            ParameterContextImpl output = new ParameterContextImpl();
            ActualParameter echo_output = new ActualParameter();
            output.add("echo_mpi_output", echo_output);

            /* parameter */
            ct.setInput(input);
            ct.setOutput(output);

            PropertiesBasedServiceImpl service = new PropertiesBasedServiceImpl();
            service.init();
            service.execute(ct);

            System.out.println("output              : " + ct.getOutput().toString());
            System.out.println("output from service : " + ct.getOutput().getValue("echo_mpi_output"));

            Assert.assertNotNull(ct.getOutput());
            Assert.assertNotNull(ct.getOutput().getValue("echo_mpi_output"));

            System.out.println("output              : " + ((StringParameterType) ((ActualParameter) ct.getOutput().getValue("echo_mpi_output")).getType()).getValue());

        } catch (Exception e) {
            e.printStackTrace();
            fail("ERROR");
        }
    }
}
[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide 
[2] - http://heshans.blogspot.com/2012/06/execute-simple-mpi-job-on-ranger-using.html

Execute simple MPI job on Ranger using Apache Airavata

In an earlier post [3] we looked at how to install a simple hello world MPI program on Ranger [1]. In this post we will look at how to execute that previously installed application on Ranger using Apache Airavata.

1. Before starting Airavata, configure the repository.properties file by replacing the default placeholder values for the following properties.

trusted.cert.location=path to certificates for ranger
myproxy.server=myproxy.teragrid.org
myproxy.user=username
myproxy.pass=password
myproxy.life=3600

Configure your certificate location and MyProxy credentials, then start a Jackrabbit instance and the Airavata server with all of its services.

2. To create a workflow to run on Ranger you need to create a Host Description, an Application Description, and a Service Description. If you don't know how to create them, refer to the Airavata in 10 minutes article [2] to understand how to create those documents. Once you are familiar with the XBaya UI, use the following values for the fields given below and create the documents using the XBaya GUI.


Host Description
Click on the check box "Define this host as Globus host"; the following two fields will then become editable.

Service Description

Fill in the values as in the service description section of the 10 minutes article [2].



Application Description

  • Executable path - /share/home/01437/ogce/airavata-test/mpi-hellow-world
  • Temporary Directory - /scratch/01437/ogce/test
Select the service descriptor and host descriptor created above. When you select the host descriptor you will see a new button, Gram Configuration. Click on it and fill in the following values.
  • Job Type - MPI
  • Project Account Number - You will see this when you log in to Ranger; put your project account number in this field
  • Project Description - A description of the project (not mandatory)
  • Queue Type - development
Click on the Update button and save the Application Description.

You have now successfully created the descriptors. Create an MPI workflow as you did in the 10 minutes article [2] and try running it.



[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide
[2] - http://incubator.apache.org/airavata/documentation/system/airavata-in-10-minutes.html
[3] - http://heshans.blogspot.com/2012/06/running-simple-mpi-job-on-ranger.html

Transfer a local file to Ranger using Airavata Client

This post will describe how to transfer a local file to Ranger [1] using Airavata Client.


NOTE: This post assumes that the AiravataClient, GFac and the local file are all on the same machine.

1. Before starting the Airavata server, configure the repository.properties file by replacing the default placeholder values of the following properties.
trusted.cert.location=path to certificates for ranger
myproxy.server=myproxy.teragrid.org
myproxy.user=username
myproxy.pass=password
myproxy.life=3600

Set your certificate location and MyProxy credentials, then start the Jackrabbit instance and the Airavata server with all of its services.

2. To create a workflow to run on Ranger you need to create a Host Description, an Application Description and a Service Description. If you don’t know how to create them, refer to the Airavata in 10 minutes article [2] to understand how to create those documents. Once you are familiar with the XBaya UI, create the documents using the XBaya GUI with the values given below.

Host Description
  • Host Address      - gatekeeper.ranger.tacc.teragrid.org
Tick the check box "Define this host as Globus host"; the following two entries will then become editable.
  • Globus Gate Keeper Endpoint - gatekeeper.ranger.tacc.teragrid.org:2119/jobmanager-sge
  • Grid FTP Endpoint           - gsiftp://gridftp.ranger.tacc.teragrid.org:2811/
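Endpoint strings like the ones above are easy to get subtly wrong. As an illustrative check (not something XBaya requires), java.net.URI can confirm that a GridFTP endpoint parses into the expected scheme, host and port before you paste it into the Host Description:

```java
import java.net.URI;

public class EndpointCheck {
    public static void main(String[] args) {
        // The GridFTP endpoint from the Host Description above.
        URI gridFtp = URI.create("gsiftp://gridftp.ranger.tacc.teragrid.org:2811/");
        // A well-formed endpoint yields all three components.
        System.out.println("scheme: " + gridFtp.getScheme()); // gsiftp
        System.out.println("host  : " + gridFtp.getHost());   // gridftp.ranger.tacc.teragrid.org
        System.out.println("port  : " + gridFtp.getPort());   // 2811
    }
}
```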
Service Description

Create a Service description with the following input and output parameters.

  • Input parameter  - echo_input (type: URI)
  • Output parameter - echo_output (type: URI)


Application Description
  • Executable path  - /share/home/01437/ogce/airavata-test/dir/file-breed.sh
  • Temporary Directory  -  /scratch/01437/ogce/test
Select the service descriptor and host descriptor created above. When you select the host descriptor, a new Gram Configuration button appears. Click on it and fill in the following values.
  • Job Type  - Single
  • Project Account Number  - You will see this when you log in to Ranger; enter your project account number in this field
  • Project Description  - description of the project (not mandatory)
  • Queue Type  - development
Click on the Update button and save the Application Description.

3. Running the workflow.

i) Using XBaya
Compose your workflow using the service created under Application Services, then hit the red play button to run the workflow. It will prompt for an input file. Give the path of the input file (the file you want to copy to Ranger) in the following format.
file:/home/heshan/Dev/testing/airavata/temp.txt
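The file: URI above can also be produced programmatically from a local path with java.io.File, which avoids typos in hand-written URIs (illustrative; the path is the example used in this post):

```java
import java.io.File;

public class LocalFileUri {
    public static void main(String[] args) {
        File input = new File("/home/heshan/Dev/testing/airavata/temp.txt");
        // File.toURI() yields a URI of the form file:/home/... expected by the prompt.
        System.out.println(input.toURI().toString());
    }
}
```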

Save the workflow in the registry (XBaya Menu → Export → To Registry). This saved workflow will be used when invoking the workflow through the AiravataClient.

ii) Using Airavata Client
import java.io.IOException;
import java.util.List;

import org.junit.Test;

// AiravataClient, WorkflowInput, WorkflowExecution and RegistryException
// come from the Airavata client API modules.
public class AiravataClientTest {

    @Test
    public void testInvokeWorkflowString() {
        try {
            AiravataClient airavataClient = new AiravataClient("xbaya.properties");

            // List every workflow template stored in the registry,
            // together with its declared inputs.
            List<String> workflowTemplateIds = airavataClient.getWorkflowTemplateIds();
            for (String templateId : workflowTemplateIds) {
                System.out.println("workflow : " + templateId);

                List<WorkflowInput> templateInputs = airavataClient.getWorkflowInputs(templateId);
                for (WorkflowInput input : templateInputs) {
                    System.out.println("Name            :" + input.getName());
                    System.out.println("Type            :" + input.getType());
                    System.out.println("Value           :" + input.getValue());
                    System.out.println("Default Value   :" + input.getDefaultValue());
                }
            }

            // Run the workflow that was saved to the registry from XBaya.
            String workflowTemplateId = "GridFtp-local-workflow";
            List<WorkflowInput> workflowInputs = airavataClient.getWorkflowInputs(workflowTemplateId);
            String topicId = airavataClient.runWorkflow(workflowTemplateId, workflowInputs);

            // The topic id identifies this run; use it to fetch execution data.
            WorkflowExecution workflowExecutionData = airavataClient.getWorkflowExecutionData(topicId);
        } catch (RegistryException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide
[2] - http://incubator.apache.org/airavata/documentation/system/airavata-in-10-minutes.html

Running a simple MPI job on Ranger

Let's consider running a simple MPI job on Ranger [1]. The MPI program considered here will be a hello-world program.

1) Write a hello world application in C.
#include <stdio.h>
#include <mpi.h>


int main (int argc, char *argv[])
{
  int rank, size;

  MPI_Init (&argc, &argv);        /* starts MPI */
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);        /* get current process id */
  MPI_Comm_size (MPI_COMM_WORLD, &size);        /* get number of processes */
  printf( "Hello world from process %d of %d\n", rank, size );
  MPI_Finalize();
  return 0;
}

2) Compile it on Ranger.
mpicc -o mpi-hellow-world mpi-hellow-world.c

3) Write a scheduler script to run your application. (Let's assume that it is saved under the name scheduler_sge_job_mpi_helloworld.)
#!/bin/bash
# Grid Engine batch job script built by Globus job manager

#$ -S /bin/bash
#$ -V
#$ -pe 16way 16
#$ -N MPI-Airavata-Testing-Script
#$ -M heshan@ogce.org
#$ -m n
#$ -q development
#$ -A ***********
#$ -l h_rt=0:09:00
#$ -o /share/home/01437/ogce/airavata-test/mpi-hello.stdout
#$ -e /share/home/01437/ogce/airavata-test/mpi-hello.stderr
ibrun /share/home/01437/ogce/airavata-test/mpi-hellow-world
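The script above follows a fixed pattern of #$ directives, which is also how the Globus job manager generates such scripts automatically. A minimal, hypothetical helper sketching that pattern (class, parameter names and paths are illustrative, not part of Airavata or Globus):

```java
public class SgeScriptBuilder {

    // Builds a minimal SGE batch script matching the shape shown above.
    public static String build(String jobName, String queue, String account,
                               String wallTime, String workDir, String executable) {
        StringBuilder sb = new StringBuilder();
        sb.append("#!/bin/bash\n");
        sb.append("#$ -S /bin/bash\n");                      // run under bash
        sb.append("#$ -V\n");                                // export environment
        sb.append("#$ -pe 16way 16\n");                      // parallel environment and core count
        sb.append("#$ -N ").append(jobName).append('\n');
        sb.append("#$ -q ").append(queue).append('\n');
        sb.append("#$ -A ").append(account).append('\n');
        sb.append("#$ -l h_rt=").append(wallTime).append('\n');
        sb.append("#$ -o ").append(workDir).append("/mpi-hello.stdout\n");
        sb.append("#$ -e ").append(workDir).append("/mpi-hello.stderr\n");
        sb.append("ibrun ").append(executable).append('\n'); // launch the MPI executable
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(build("MPI-Airavata-Testing-Script", "development", "PROJECT-ACCOUNT",
                "0:09:00", "/share/home/01437/ogce/airavata-test",
                "/share/home/01437/ogce/airavata-test/mpi-hellow-world"));
    }
}
```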

4) Use the qsub command to submit a batch job to Ranger.
ogce@login3.ranger.tacc.utexas.edu:/airavata-test/{13}> qsub scheduler_sge_job_mpi_helloworld
Once the job is submitted, the following output is displayed.
-------------------------------------------------------------------
------- Welcome to TACC's Ranger System, an NSF XD Resource -------
-------------------------------------------------------------------
--> Checking that you specified -V...
--> Checking that you specified a time limit...
--> Checking that you specified a queue...
--> Setting project...
--> Checking that you specified a parallel environment...
--> Checking that you specified a valid parallel environment name...
--> Checking that the minimum and maximum PE counts are the same...
--> Checking that the number of PEs requested is valid...
--> Ensuring absence of dubious h_vmem,h_data,s_vmem,s_data limits...
--> Requesting valid memory configuration (31.3G)...
--> Verifying WORK file-system availability...
--> Verifying HOME file-system availability...
--> Verifying SCRATCH file-system availability...
--> Checking ssh setup...
--> Checking that you didn't request more cores than the maximum...
--> Checking that you don't already have the maximum number of jobs...
--> Checking that you don't already have the maximum number of jobs in queue development...
--> Checking that your time limit isn't over the maximum...
--> Checking available allocation...
--> Submitting job...


Your job 2518464 ("MPI-Airavata-Testing-Script2") has been submitted

5) Use the qstat command to check the status of the job.
ogce@login3.ranger.tacc.utexas.edu:/airavata-test/{17}> qstat
job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID 
-----------------------------------------------------------------------------------------------------------------
2518464 0.00000 MPI-Airava ogce         qw    04/19/2012 11:42:01                                   16        

6) The result of the batch job is written to the specified output file.
TACC: Setting memory limits for job 2518464 to unlimited KB
TACC: Dumping job script:
--------------------------------------------------------------------------------
#!/bin/bash
# Grid Engine batch job script built by Globus job manager

#$ -S /bin/bash
#$ -V
#$ -pe 16way 16
#$ -N MPI-Airavata-Testing-Script2
#$ -M ***@ogce.org
#$ -m n
#$ -q development
#$ -A TG-STA110014S
#$ -l h_rt=0:09:00
#$ -o /share/home/01437/ogce/airavata-test/mpi-hello.stdout
#$ -e /share/home/01437/ogce/airavata-test/mpi-hello.stderr
ibrun /share/home/01437/ogce/airavata-test/mpi-hellow-world
--------------------------------------------------------------------------------
TACC: Done.
TACC: Starting up job 2518464
TACC: Setting up parallel environment for OpenMPI mpirun.
TACC: Setup complete. Running job script.
TACC: starting parallel tasks...
echo_mpi_output=Hello world from process 7 of 16
echo_mpi_output=Hello world from process 6 of 16
echo_mpi_output=Hello world from process 2 of 16
echo_mpi_output=Hello world from process 4 of 16
echo_mpi_output=Hello world from process 3 of 16
echo_mpi_output=Hello world from process 10 of 16
echo_mpi_output=Hello world from process 13 of 16
echo_mpi_output=Hello world from process 9 of 16
echo_mpi_output=Hello world from process 8 of 16
echo_mpi_output=Hello world from process 0 of 16
echo_mpi_output=Hello world from process 1 of 16
echo_mpi_output=Hello world from process 12 of 16

[1] - http://www.tacc.utexas.edu/user-services/user-guides/ranger-user-guide