Rubicon Red’s MyST, Winner Of 2015 Queensland AIIA iAward – New Product Category

Rubicon Red is proud to announce success at this year’s Queensland AIIA iAwards, with Rubicon Red’s MyST software taking out the New Product category. This award further reinforces Rubicon Red’s commitment to thought leadership, innovation, and expertise, and will see Rubicon Red’s MyST solution progress to the National iAwards event, scheduled for August 2015.

The AIIA iAwards New Product award recognises an outstanding ICT product developed by an Australian organisation. The New Product must have moved on from the conceptual stage and into production and sales. The New Product category was judged against the following criteria:
  • Functionality
  • Marketability
  • Quality Of Technology
  • Uniqueness

MyST

About MyST

MyST delivers a zero-coding, 100% automated DevOps experience for Oracle Middleware. Launched in 2013, MyST has established a market-leading position, with rapid adoption in Australia and now the US, securing Fortune 200 and ASX 200 customers.

About The iAwards

For over 20 years the iAwards has been recognising and celebrating achievement and innovation in ICT across all areas of the economy. The iAwards honours companies at the cutting edge of technology innovation and celebrates the up-and-coming innovators of the future. The iAwards provides the platform to discover, recognise, and reward ICT innovations that are already significantly impacting the community, or have the potential to do so. The iAwards are judged by the industry and provide recognition that extends across all sectors of the digital economy.

DevOps and Continuous Delivery for Oracle SOA and BPM

The goal of Continuous Delivery and DevOps is to help software development teams drive waste out of their process by simultaneously automating the process of software delivery and reducing the batch size of their work. This allows organizations to rapidly, reliably, and repeatedly deliver software enhancements faster, with less risk and less cost.

DevOps

Continuous Integration (CI) is the practice of automatically building and testing a piece of software: either each time code is committed by a developer or, in environments with a large number of small commits or a long-running build, on a regular scheduled basis.

Continuous Delivery (CD) goes a step further to automate the build, packaging, deployment, and regression testing, so that the software can be released at any time into production. Continuous deployment takes this another step further, in that code is automatically deployed into production, rather than waiting for the business to decide when to release it.

DevOps (development and operations) builds on Continuous Delivery and describes an agile, collaborative relationship between Development and IT Operations. The goal of DevOps is to improve the relationship between the two by advocating better communication and collaboration, enabling the business to deploy features into production quickly and with minimum risk, and to detect and quickly correct problems when they do occur, without disrupting other services.

Work in Small Batches

The batch size is the unit at which code under development is promoted between stages, such as SIT, UAT, and Pre-Prod, in the development process. Under a traditional development process, the code from multiple developers working for weeks or months is batched up and integrated together. During the integration process, numerous defects will be surfaced. Some will be the result of a lack of unit testing, but many will be down to invalid assumptions about the various pieces of code developed in isolation and how they will work together as part of the overall solution.

This is especially the case for Oracle SOA and BPM projects, which involve integrating multiple systems together. It is a common mistake for all parties to agree on the interfaces between the systems and then go off and code independently, with each party making invalid assumptions about how the other systems will behave. The bigger the batch, the longer these assumptions remain undiscovered, and the greater the number of defects in the batch. A significant amount of the time taken to fix a defect is actually spent trying to isolate the problem and determine the root cause, rather than fixing the problem.

The issue with a big batch is that many of the defects are interwoven, and the volume of code that needs to be analyzed to troubleshoot a defect is greater. In addition, code based on invalid assumptions can often require significant re-work once these invalid assumptions are discovered; the longer they remain undiscovered, the greater the amount of invalid code written and the greater the amount of re-work required. As a result, the amount of time taken to identify and fix defects increases exponentially with the batch size.

Continuous delivery promotes the use of small batches, where new features are developed incrementally and promoted into the various test environments on a regular and frequent basis. Small batches mean problems are caught immediately and instantly localized, making it far simpler to identify the root cause and fix it. Invalid assumptions are discovered far earlier in the process, when they are cheap to fix, resulting in higher-quality software.

Software components that are implemented in isolation are full of assumptions about the other components with which they will be integrated. The sooner we can identify these assumptions, the smaller the impact and the associated waste will be. Small batches enable us to integrate these components earlier in their respective development lifecycles, and thus reduce the risk and overall impact on the project.

Process for Releasing / Deploying Software MUST be Repeatable and Reliable

To enable the development team to work in small batches, we need to remove the waste in the current build and deployment process. This requires that the process for releasing/deploying software MUST be efficient, repeatable, and reliable.

This is achieved by automating each step in the software delivery process, as manual steps will quickly get in the way, become a bottleneck, or risk introducing unintended variation. This means automating the build and deployment of code, the provisioning of middleware environments, plus the testing of code.

Minimise Differences Between Environments

A common anti-pattern is deploying to a production-like environment only after development is complete. It is unfortunately all too common for solutions to fail on first deployment to any environment. Small inconsistencies between environments, such as disparities in the configuration of deployed SOA/BPM composites and OSB services, adapter configurations, WebLogic resources, or applied patches, can cause issues with deployed code that are difficult to diagnose and rectify. This means that there can be almost no confidence that a particular software release will work successfully if it has never been tested in a production-like environment. To avoid this, deployment should always be to production-like environments.

Each time we make a deployment to any environment, we are making changes to that environment, which means that it is no longer in alignment with production. If the release passes and the code gets promoted through to the next stage and into production, then that is not an issue. But if the release fails, we need to restore the environment back to its pre-deployment state prior to deploying the next release.

Build Quality In

W. Edwards Deming, in his famous management guideline, stated:

"Cease dependence on mass inspection to achieve quality and improve the process and build quality into the product in the first place”.

This means ensuring improvement and quality assurance at every step of the value stream, instead of testing just the final product for compliance with requirements.

For software, it translates to writing automated tests at multiple levels (unit, component, and acceptance) and automating their execution as part of the build – test – deployment pipeline.

This way, whenever a commit happens (a change made to the application, its configuration, or the environment and software stack that it runs on), an instance of the pipeline runs, and so do the automated tests that verify and validate business expectations in the form of test cases.

Applying Continuous Delivery in the development of Oracle Middleware projects can deliver significant reductions in development time and costs.

Download White Paper

In subsequent posts I will go into further detail on how we can apply DevOps and Continuous Delivery to Oracle BPM and SOA projects. Click here to download a white paper on Best Practice for Implementing Continuous Delivery for Oracle Middleware.

Monitoring DB Growth for FMW

While working on customer sites, I am sure most of us have encountered the following questions (or more) from clients, especially the DBA team:

  • How do we track the tablespace growth for Fusion Middleware products, and SOA-INFRA in particular?
  • When a new business process is deployed into an environment, how do we determine the amount of tablespace its instances will require over time?
  • What will be the impact of increasing the SOA audit level on the tablespace growth?

It’s very difficult to answer the above questions upfront, but if we implement a monitoring solution for tablespace growth, we can potentially answer them. This blog provides a solution that monitors database growth at both the table and tablespace level.

This solution can be scheduled at a weekly/monthly frequency in production to monitor how the tablespace is growing, or it can be used to capture a snapshot before and after a load test to understand how much growth the tablespace has undergone. This can act as a vital statistic for increasing the tablespace whenever a new business process is rolled out to production; a sketch of such a scheduled job is shown below.
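
As a sketch of the weekly option, a DBMS_SCHEDULER job along the following lines could invoke the GATHER_SCHEMA_TABLE_SIZE procedure created later in this post; the job name and calendar expression are illustrative only:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'GATHER_TBL_SIZE_STATS_JOB',      -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          SCH_TBL_SIZE_STATS_PKG.GATHER_SCHEMA_TABLE_SIZE(
                            SCH_TBL_SIZE_STATS_PKG.SCHEMA_LIST(''DEV_SOAINFRA''));
                        END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=WEEKLY;BYDAY=SUN;BYHOUR=2', -- every Sunday at 2am
    enabled         => TRUE,
    comments        => 'Weekly capture of FMW tablespace growth statistics');
END;
/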

The solution contains two tables to hold the DB growth statistics:

  • SCH_TBL_SIZE_STATS_HDR - Captures the tablespace-level growth statistics
  • SCH_TBL_SIZE_STATS_DTL - Captures the table-level growth statistics

The script below creates the above-mentioned tables and ideally should be run under a schema that has the DBA privileges needed to monitor any required tablespace:

/**
#####################################################################
Table Spec - SCH_TBL_SIZE_STATS_HDR & SCH_TBL_SIZE_STATS_DTL
#####################################################################
@schema_table_size_stats_tbl_script.sql
Tables to contain the statistics regarding the tablespace size growth by schema.
Copyright Rubicon Red Pty Ltd
Author - gkrishna
**/
DROP TABLE SCH_TBL_SIZE_STATS_DTL
/
DROP TABLE SCH_TBL_SIZE_STATS_HDR
/
DROP SEQUENCE SCH_TBL_SIZE_STATS_DTL_SEQ
/
DROP SEQUENCE SCH_TBL_SIZE_STATS_HDR_SEQ
/
CREATE SEQUENCE SCH_TBL_SIZE_STATS_HDR_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE SEQUENCE SCH_TBL_SIZE_STATS_DTL_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE TABLE SCH_TBL_SIZE_STATS_HDR
(
  SCH_TBL_SIZE_STATS_HDR_ID NUMBER(18) PRIMARY KEY,
  OWNER_SCHEMA              VARCHAR2(30) NOT NULL,
  RUN_DATE                  DATE NOT NULL,
  MB_ALLOCATED              NUMBER NOT NULL,
  MB_FREE                   NUMBER NOT NULL,
  MB_USED                   NUMBER NOT NULL,
  PCT_FREE                  NUMBER NOT NULL,
  PCT_USED                  NUMBER NOT NULL
)
/
CREATE TABLE SCH_TBL_SIZE_STATS_DTL
(
  SCH_TBL_SIZE_STATS_DTL_ID NUMBER(18) PRIMARY KEY,
  SCH_TBL_SIZE_STATS_HDR_ID NUMBER(18) NOT NULL,
  TABLE_NAME                VARCHAR2(30) NOT NULL,
  NO_OF_ROWS                NUMBER(15) NOT NULL,
  TABLE_SIZE_IN_MB          NUMBER,
  CONSTRAINT SCH_TBL_SIZE_STATS_HDR_FK FOREIGN KEY (SCH_TBL_SIZE_STATS_HDR_ID)
    REFERENCES SCH_TBL_SIZE_STATS_HDR (SCH_TBL_SIZE_STATS_HDR_ID)
)
/

The package below contains the procedure GATHER_SCHEMA_TABLE_SIZE, which gathers the tablespace growth statistics; it needs to be created/compiled in the same schema as the above tables:

Package Specification

--Package Specification
CREATE OR REPLACE PACKAGE SCH_TBL_SIZE_STATS_PKG
AS
  -- Type to hold the list of schemas for which the statistics need to be calculated.
  TYPE SCHEMA_LIST IS TABLE OF VARCHAR2(30);
  -- Procedure to gather the schema statistics
  PROCEDURE GATHER_SCHEMA_TABLE_SIZE(
      P_SCHEMA_LIST IN SCH_TBL_SIZE_STATS_PKG.SCHEMA_LIST);
  -- Procedure to clean up the stats before re-runs for the same day
  PROCEDURE CLEANUP_STATS(
      P_SCHEMA_NAME IN VARCHAR2,
      P_RUN_DATE    IN DATE);
END SCH_TBL_SIZE_STATS_PKG;
/

Package Body

--Package Body
CREATE OR REPLACE PACKAGE BODY SCH_TBL_SIZE_STATS_PKG
AS
-- Procedure to gather the schema statistics
PROCEDURE GATHER_SCHEMA_TABLE_SIZE(
    P_SCHEMA_LIST IN SCH_TBL_SIZE_STATS_PKG.SCHEMA_LIST)
IS
  -- All valid, non-generated tables owned by the given schema.
  CURSOR LIST_SCHEMA_TABLES_CUR(P_OWNER VARCHAR2)
  IS
    SELECT OBJECT_ID,
           OBJECT_NAME
    FROM   DBA_OBJECTS
    WHERE  OBJECT_TYPE = 'TABLE'
    AND    OWNER       = P_OWNER
    AND    STATUS      = 'VALID'
    AND    GENERATED   = 'N'
    AND    OBJECT_NAME NOT LIKE '%$%'; -- System tables.
  l_hdr_id       NUMBER(18);
  l_dtl_id       NUMBER(18);
  l_schema_found VARCHAR2(1);
BEGIN
  FOR l_index IN P_SCHEMA_LIST.FIRST .. P_SCHEMA_LIST.LAST
  LOOP
    -- Check to make sure the schema exists; otherwise just continue with the
    -- rest of the schemas in the list.
    BEGIN
      SELECT 'Y'
      INTO   l_schema_found
      FROM   DBA_USERS
      WHERE  USERNAME = P_SCHEMA_LIST(l_index);
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
      dbms_output.put_line('Invalid Schema '||P_SCHEMA_LIST(l_index));
      CONTINUE;
    END;
    -- Clean up the statistics if they already exist for the day.
    CLEANUP_STATS(P_SCHEMA_LIST(l_index), SYSDATE);
    -- Get the primary key value for the header table.
    SELECT SCH_TBL_SIZE_STATS_HDR_SEQ.NEXTVAL
    INTO   l_hdr_id
    FROM   DUAL;
    -- Populate the header table with tablespace-level details.
    INSERT INTO SCH_TBL_SIZE_STATS_HDR
      (
        SCH_TBL_SIZE_STATS_HDR_ID,
        OWNER_SCHEMA,
        RUN_DATE,
        MB_ALLOCATED,
        MB_FREE,
        MB_USED,
        PCT_FREE,
        PCT_USED
      )
    SELECT *
    FROM
      (
        SELECT l_hdr_id,
               a.tablespace_name                      OWNER_SCHEMA,
               SYSDATE                                RUN_DATE,
               ROUND(a.bytes/1048576,2)               MB_ALLOCATED,
               ROUND(b.bytes/1048576,2)               MB_FREE,
               ROUND((a.bytes-b.bytes)/1048576,2)     MB_USED,
               ROUND(b.bytes/a.bytes*100,2)           PCT_FREE,
               ROUND((a.bytes-b.bytes)/a.bytes,2)*100 PCT_USED
        FROM
          ( -- Bytes allocated per tablespace.
            SELECT tablespace_name,
                   SUM(a.bytes) bytes
            FROM   DBA_DATA_FILES a
            GROUP BY tablespace_name
          ) a,
          ( -- Bytes free per tablespace.
            SELECT a.tablespace_name,
                   NVL(SUM(b.bytes),0) bytes
            FROM   DBA_DATA_FILES a,
                   DBA_FREE_SPACE b
            WHERE  a.tablespace_name = b.tablespace_name (+)
            AND    a.file_id         = b.file_id (+)
            GROUP BY a.tablespace_name
          ) b,
          DBA_TABLESPACES c
        WHERE a.tablespace_name   = b.tablespace_name(+)
        AND   a.tablespace_name   = c.tablespace_name
        AND   a.tablespace_name   = P_SCHEMA_LIST(l_index)
        ORDER BY a.tablespace_name
      );
    -- Now find all the non-system tables in the schema and populate the
    -- statistics in the detail table.
    FOR tab IN LIST_SCHEMA_TABLES_CUR(P_SCHEMA_LIST(l_index))
    LOOP
      -- Make sure we compute the statistics first before calculating the
      -- table size.
      EXECUTE IMMEDIATE 'ANALYZE TABLE '||P_SCHEMA_LIST(l_index)||'.'||tab.OBJECT_NAME||
      ' COMPUTE STATISTICS';
      -- Get the primary key value for the detail table.
      SELECT SCH_TBL_SIZE_STATS_DTL_SEQ.NEXTVAL
      INTO   l_dtl_id
      FROM   DUAL;
      -- Populate the statistics for each table; the size sums the table's own
      -- segment plus any LOB and index segments.
      INSERT INTO SCH_TBL_SIZE_STATS_DTL
        (
          SCH_TBL_SIZE_STATS_DTL_ID,
          SCH_TBL_SIZE_STATS_HDR_ID,
          TABLE_NAME,
          NO_OF_ROWS,
          TABLE_SIZE_IN_MB
        )
      SELECT l_dtl_id,
             l_hdr_id,
             table_name,
             NVL(num_rows,0),
             (
               SELECT SUM(bytes_in_mb) AS total_size_in_mb
               FROM
                 ( -- LOB segments belonging to the table.
                   SELECT dbs.bytes/(1024)/(1024) AS bytes_in_mb
                   FROM   dba_segments dbs,
                          dba_lobs dbl
                   WHERE  dbl.table_name   = tab.OBJECT_NAME
                   AND    dbs.segment_name = dbl.segment_name
                   UNION
                   -- Index segments belonging to the table.
                   SELECT dbs.bytes/(1024)/(1024) AS bytes_in_mb
                   FROM   dba_segments dbs,
                          dba_indexes dbi
                   WHERE  dbi.table_name   = tab.OBJECT_NAME
                   AND    dbs.segment_name = dbi.index_name
                   UNION
                   -- The table segment itself.
                   SELECT dbs.bytes/(1024)/(1024) AS bytes_in_mb
                   FROM   dba_segments dbs,
                          dba_tables dbt
                   WHERE  dbt.table_name   = tab.OBJECT_NAME
                   AND    dbs.segment_name = dbt.table_name
                 ) tbl_size
             ) AS total_size_in_mb
      FROM  dba_tables tbl
      WHERE tbl.table_name = tab.OBJECT_NAME
      AND   tbl.owner      = P_SCHEMA_LIST(l_index);
      -- Maybe we need a better strategy here for the commit; for now
      -- this should be OK.
      COMMIT;
    END LOOP;
  END LOOP;
END GATHER_SCHEMA_TABLE_SIZE;
-- Procedure to clean up the stats before re-runs for the same day
PROCEDURE CLEANUP_STATS(
    P_SCHEMA_NAME IN VARCHAR2,
    P_RUN_DATE    IN DATE)
IS
BEGIN
  -- Delete the detail table statistics for the given schema.
  DELETE FROM SCH_TBL_SIZE_STATS_DTL
  WHERE SCH_TBL_SIZE_STATS_HDR_ID IN
    (
      SELECT SCH_TBL_SIZE_STATS_HDR_ID
      FROM   SCH_TBL_SIZE_STATS_HDR
      WHERE  TRUNC(RUN_DATE) = TRUNC(P_RUN_DATE)
      AND    OWNER_SCHEMA    = P_SCHEMA_NAME
    );
  -- Delete the header table statistics for the given schema.
  DELETE FROM SCH_TBL_SIZE_STATS_HDR
  WHERE TRUNC(RUN_DATE) = TRUNC(P_RUN_DATE)
  AND   OWNER_SCHEMA    = P_SCHEMA_NAME;
  COMMIT;
END CLEANUP_STATS;
END SCH_TBL_SIZE_STATS_PKG;
/

Now let’s execute the procedure to gather the statistics for the SOA-INFRA schema:

DECLARE
  l_schema_list SCH_TBL_SIZE_STATS_PKG.SCHEMA_LIST;
BEGIN
  l_schema_list := SCH_TBL_SIZE_STATS_PKG.SCHEMA_LIST('DEV_SOAINFRA');
  SCH_TBL_SIZE_STATS_PKG.GATHER_SCHEMA_TABLE_SIZE(l_schema_list);
END;
/

Once the script finishes, let’s check the generated data:

SELECT * FROM SCH_TBL_SIZE_STATS_HDR;

FMW DB Growth statistics tablespace level

The above result set shows the statistics for the 25th and 26th of September 2014, and it can be seen that there is an overall growth of 20 MB in the schema. Querying the detail table gives us the statistics at the table level:

SELECT * FROM SCH_TBL_SIZE_STATS_DTL WHERE SCH_TBL_SIZE_STATS_HDR_ID = 9 ORDER BY TABLE_SIZE_IN_MB DESC;

FMW DB Growth statistics table level

Above is the snapshot of table-level growth for the 26th of September. Simple queries can be written against previous run dates to calculate how much each table has grown in rows and size; one such query is sketched below.
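
As a sketch (the schema name and run dates are those from the example above and are illustrative), a query along these lines reports the per-table growth between two runs:

-- Compare each table's size across two runs of the gather procedure.
SELECT d1.TABLE_NAME,
       d1.NO_OF_ROWS       - d2.NO_OF_ROWS       AS ROW_GROWTH,
       d1.TABLE_SIZE_IN_MB - d2.TABLE_SIZE_IN_MB AS MB_GROWTH
FROM   SCH_TBL_SIZE_STATS_DTL d1,
       SCH_TBL_SIZE_STATS_HDR h1,
       SCH_TBL_SIZE_STATS_DTL d2,
       SCH_TBL_SIZE_STATS_HDR h2
WHERE  d1.SCH_TBL_SIZE_STATS_HDR_ID = h1.SCH_TBL_SIZE_STATS_HDR_ID
AND    d2.SCH_TBL_SIZE_STATS_HDR_ID = h2.SCH_TBL_SIZE_STATS_HDR_ID
AND    h1.OWNER_SCHEMA              = 'DEV_SOAINFRA'
AND    h2.OWNER_SCHEMA              = h1.OWNER_SCHEMA
AND    TRUNC(h1.RUN_DATE)           = DATE '2014-09-26'
AND    TRUNC(h2.RUN_DATE)           = DATE '2014-09-25'
AND    d1.TABLE_NAME                = d2.TABLE_NAME
ORDER BY MB_GROWTH DESC;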

Implementing GPS Detection in MAF

Oracle MAF does not currently provide an out-of-the-box solution for detecting whether GPS is enabled (at least as of version 2.1.2). This is problematic when using a function such as startLocationMonitor: when there is no GPS, the app will lock up for about 15 seconds and then display an ADF exception that cannot be caught.

This article will show one approach to solving this problem.

Step 1. Create the MAF app

We’re going to set up an app with a single welcome feature, containing a page which displays our GPS status. We’ll also add a call to startLocationMonitor, which will be triggered off the GPS status.

Step through the new MAF application wizard – I have called my app GPSTest and given it an application prefix of com.rubiconred.test.gpstest. Create a feature called welcome and set it to be an AMX page. Create the page and call it welcome.amx.

We need to put the call to the Cordova plugin somewhere – so let’s embed that in a new JavaScript file. Right click on the View Controller -> Web Content directory and choose to create a JavaScript file. Call it gps.js and add it to the feature reference as shown below.

1. create gps.js

2. maf-application.xml

At this point we are missing the crucial piece – the Cordova plugin! The one used in this example can be found at https://github.com/fastrde/phonegap-checkGPS. Note that there now seem to be several different plugins available to do similar things. Extract the zip and then edit the maf-application.xml to point to the plugin directory. Also tick the geolocation plugin, as this is a dependency (if this isn’t ticked then the JDeveloper build will connect and download the plugin).

3. plugins configuration

Update gps.js and paste the following.

CheckGPS.check(function(){
    //GPS is enabled!
    alert("GPS is available");
  },
  function(){
    //GPS is disabled!
    alert('GPS is not available');
  });

Finally, to aid in testing, drag the ApplicationFeatures->resetFeature(String) method onto the welcome.amx page to replace the command button in the primary facet. At the prompt enter the feature id, which in this case is com.rubiconred.test.gpstest.welcome.

<?xml version="1.0" encoding="UTF-8" ?>

<amx:view xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:amx="http://xmlns.oracle.com/adf/mf/amx"

xmlns:dvtm="http://xmlns.oracle.com/adf/mf/amx/dvt">

  <amx:panelPage id="pp1">

    <amx:facet name="header">

<amx:outputText value="Header" id="ot1"/>

    </amx:facet>

    <amx:facet name="primary">

      <amx:commandButton actionListener="#{bindings.resetFeature.execute}" text="resetFeature"

disabled="#{!bindings.resetFeature.enabled}" id="cb3"/>

    </amx:facet>

    <amx:facet name="secondary">

<amx:commandButton id="cb2"/>

    </amx:facet>

</amx:panelPage>

</amx:view>

Deploy the app, check that Location is on and launch the app.

4. GPS is available

Turn off location on the device, click resetFeature and the following should be displayed.

5. GPS is not available

Step 2. Extend to a bean

It would be easier within the app if this data were available through a managed bean, mainly because it is easier to embed in EL expressions, and it also abstracts us away from having to invoke the JavaScript from various places. To do this, create a new bean called GPSBean and paste the following:

import oracle.adfmf.java.beans.PropertyChangeListener;
import oracle.adfmf.java.beans.PropertyChangeSupport;

public class GPSBean {
    private PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this);

    public GPSBean() {
        super();
    }

    // Holds the last known GPS availability status.
    private boolean status;

    public void setStatus(boolean status) {
        boolean oldStatus = this.status;
        this.status = status;
        propertyChangeSupport.firePropertyChange("status", oldStatus, status);
    }

    public boolean isStatus() {
        return status;
    }

    public void addPropertyChangeListener(PropertyChangeListener l) {
        propertyChangeSupport.addPropertyChangeListener(l);
    }

    public void removePropertyChangeListener(PropertyChangeListener l) {
        propertyChangeSupport.removePropertyChangeListener(l);
    }
}

This defines an object with a single status field. Update the adfc-mobile-config.xml to add this object into the pageFlowScope as shown below.

6. bean definition

The subtle change to make is to modify the getter for the status field, by adding a line that invokes a JavaScript function responsible for deriving the new value. Update the isStatus function as follows:

    public boolean isStatus() {
        // Trigger the JavaScript check; the bean value is updated asynchronously
        // via setGPSStatus, so we return the current (cached) status here.
        AdfmfContainerUtilities.invokeContainerJavaScriptFunction(
            "com.rubiconred.test.gpstest.welcome", "application.checkGPSStatus", new Object[] {});
        return status;
    }

This makes a call out to a checkGPSStatus method, which has to be added to gps.js. Paste the following over the existing JavaScript. Notice that the old direct call to CheckGPS.check has been removed and is now embedded in the checkGPSStatus function. This also stops it running each time the page is loaded.

(function() {
  if (!window.application) window.application = {};

  application.checkGPSStatus = function() {
    CheckGPS.check(function(){
        //GPS is enabled!
        adf.mf.api.invokeMethod("com.rubiconred.test.gpstest.mobile.GPSBean", "setGPSStatus",
            true, onInvokeSuccess, onFail);
      },
      function(){
        //GPS is disabled!
        adf.mf.api.invokeMethod("com.rubiconred.test.gpstest.mobile.GPSBean", "setGPSStatus",
            false, onInvokeSuccess, onFail);
      });
    return true;
  };

  function onFail() {
    // alert("It failed");
  };

  function onInvokeSuccess(param) {
  };
})();

This new JavaScript function makes a call to setGPSStatus on the GPSBean, which will be used to trigger the setting of the status field. Copy the following method into GPSBean:

    public void setGPSStatus(boolean status) {
        ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{pageFlowScope.gpsBean.status}", Boolean.class);
        ve.setValue(AdfmfJavaUtilities.getAdfELContext(), status);
    }

Note: because the JavaScript function calls the class directly (not an object instance), the method must set the value on the pageFlowScope object we are using.

Finally, the previous method of resetting the welcome page was a bit of a kludge; it was needed to force the page to refresh and the JavaScript to re-run on page load. Let’s do this a different way, by adding a button to the page to trigger the status check. Aligned with this, we’ll add some output text fields that are based on the status field, via an EL expression. Copy the following onto the welcome page.

<amx:outputText value="GPS is ON" id="ot2" rendered="#{pageFlowScope.gpsBean.status}"/>

<amx:outputText value="GPS is OFF" id="ot3" rendered="#{pageFlowScope.gpsBean.status==false}"/>

<amx:commandButton text="refresh status" id="cb1" actionListener="#{pageFlowScope.gpsBean.updateStatus}"/>

Update GPSBean with a new function that the button will call.

    public void updateStatus(ActionEvent actionEvent) {
        // Trigger a check of the GPS
        this.isStatus();
    }

This demonstrates that accessing GPSBean.status from anywhere within the application will trigger an update of the status.

Deploy and test with Location on. GPS is ON will show, as location is available. The EL expression #{pageFlowScope.gpsBean.status} is used to render this output text, so it shows when the status is true.

7. gps on

Clicking refresh status will trigger an update and, as location is available, it remains as GPS is ON. Turn off Location on the device and click the refresh status button. As shown below, the message becomes GPS is OFF, because the status now evaluates to false.

8. gps off

Step 3. Add call to startLocationMonitor

The original intent was to add a page that was able to call location monitoring without throwing an error. Now that the EL expression exists, it is relatively easy to add the call as required.

Drag the DeviceFeature->startLocationMonitor method from the Data Controls onto the welcome.amx page. Select the option to add as a button, and then at the prompt enter true, 10000, and pageFlowScope.gpsBean.locationUpdated. This tells the built-in location monitoring control to use high accuracy for the results, to update every 10 seconds, and which endpoint to send location details to.

9. startLocationMonitor binding

This will add a button to the page, which will allow location monitoring to be triggered. However, our use case is to trigger it automatically. There are several options at this point, but for our use case the easiest is simply to update the setGPSStatus method to execute a binding. Paste the code below over the existing method.

    public void setGPSStatus(boolean status) {
        ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{pageFlowScope.gpsBean.status}", Boolean.class);
        ve.setValue(AdfmfJavaUtilities.getAdfELContext(), status);
        // Check whether the location monitor should now be triggered.
        if (status == true) {
            AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext();
            MethodExpression me = AdfmfJavaUtilities.getMethodExpression(
                "#{bindings.startLocationMonitor.execute}", Object.class, new Class[] {});
            me.invoke(adfELContext, new Object[] {});
        }
    }

To summarise - these actions have added a call to startLocationMonitor, which is controlled through the managed bean and the availability of GPS. The final step is to add the method called when startLocationMonitor passes back an update. Edit the GPSBean and paste the following at the end of the class:

    private double longitude = 0;
    private double latitude = 0;
    private boolean locationDetermined = false;

    public void setLongitude(double longitude) {
        double oldLongitude = this.longitude;
        this.longitude = longitude;
        propertyChangeSupport.firePropertyChange("longitude", oldLongitude, longitude);
    }

    public double getLongitude() {
        return longitude;
    }

    public void setLatitude(double latitude) {
        double oldLatitude = this.latitude;
        this.latitude = latitude;
        propertyChangeSupport.firePropertyChange("latitude", oldLatitude, latitude);
    }

    public double getLatitude() {
        return latitude;
    }

    public void setLocationDetermined(boolean locationDetermined) {
        boolean oldLocationDetermined = this.locationDetermined;
        this.locationDetermined = locationDetermined;
        propertyChangeSupport.firePropertyChange("locationDetermined", oldLocationDetermined, locationDetermined);
    }

    public boolean isLocationDetermined() {
        return locationDetermined;
    }

    // Callback invoked by startLocationMonitor with each location update.
    public void locationUpdated(Location currentLocation) {
        this.setLatitude(currentLocation.getLatitude());
        this.setLongitude(currentLocation.getLongitude());
        // Track that a location has been calculated.
        if (this.getLatitude() != 0 && this.getLongitude() != 0) {
            this.setLocationDetermined(true);
        }
    }

The longitude and latitude fields are used to store location details, with locationDetermined being used to track that an actual reference has been found. This could be used later as a way to show or hide certain fields (e.g. if you had a distance to the nearest store displayed on the page).

Finally, the welcome page needs to be updated to show these details. Go back to welcome.amx and paste the following under the refresh button. The button for startLocationMonitor can also be removed, as we are executing this via the binding trigger.

<amx:panelGroupLayout id="pgl1" layout="vertical" rendered="#{pageFlowScope.gpsBean.locationDetermined}">
  <amx:outputText value="Longitude #{pageFlowScope.gpsBean.longitude}" id="ot4"/>
  <amx:outputText value="Latitude #{pageFlowScope.gpsBean.latitude}" id="ot5"/>
</amx:panelGroupLayout>

Deploy the app and launch with Location On.

10. location details

The latitude and longitude will now update automatically.

Close the app, turn off Location on the device and launch again. There is no error displayed as location monitoring has not been triggered.

11. location no error

Turn on Location on the device, click refresh status, and in a few seconds the location will be displayed.

12. location after refresh

Note: Testing with the location monitor service shows that the interval is largely ignored (at least on Android). Updates will only fire when it is determined the device has travelled a distance worth notifying. Equally, these updates can come every 0.5 seconds, rather than every 10 seconds. If you are struggling to get the function working - go for a walk! You may need to go a couple of hundred metres depending on the network and whether GPS or Wi-Fi location is being used.

Step 4. Extend to ‘real-time’

There is one further extension that could be added to ensure a ‘real-time’ GPS status update. If this is important to the app, then the following change to the gps.js functions will check every 5 seconds for the latest status.

function onInvokeSuccess(param) {
    // set timeout to trigger in 5 seconds
    setTimeout(function(){application.checkGPSStatus()}, 5000);
};

Now when location is turned on or off on the device, the update will flow through automatically within 5 seconds. However, it is worth assessing the need for this, as it will impact the battery and performance in general. It is likely that the app only needs to know when location is available (i.e. when it is possible to call the location monitor without an error). In this instance it may be better to move the setTimeout call so that it only occurs when location isn’t available. The JavaScript for this approach is shown below and replaces the current gps.js:

(function() {
  if (!window.application) window.application = {};

  application.checkGPSStatus = function() {
    CheckGPS.check(function(){
        //GPS is enabled!
        adf.mf.api.invokeMethod("com.rubiconred.test.gpstest.mobile.GPSBean", "setGPSStatus",
            true, onInvokeSuccess, onFail);
      },
      function(){
        //GPS is disabled!
        adf.mf.api.invokeMethod("com.rubiconred.test.gpstest.mobile.GPSBean", "setGPSStatus",
            false, onInvokeSuccessDisabled, onFail);
      });
    return true;
  };

  function onFail() {
    // alert("It failed");
    // setTimeout to trigger in 5 seconds
    setTimeout(function(){application.checkGPSStatus()}, 5000);
  };

  function onInvokeSuccessDisabled(param) {
    // no location, so try again in 5 seconds
    setTimeout(function(){application.checkGPSStatus()}, 5000);
  };

  function onInvokeSuccess(param) {
  };
})();

Summary

This article has shown how you can build a managed bean that allows the evaluation of the GPS status and the subsequent triggering of location monitoring. It also shows a quick method for making this real-time (although care should be taken in doing so).

Eliminating Waste from Oracle SOA and BPM Projects

Automate Everything

Today every business is a digital business, where the value that the business delivers to its customers, through its products and/or services, is increasingly derived from the software “systems” that underpin them. The end service delivered to the customer is not performed by a single system, but rather a patchwork of applications, each one performing a particular business function. Oracle Middleware components, such as the Oracle BPM Suite and Oracle SOA Suite, provide the application platform to combine these business apps, like puzzle pieces, into an integrated solution in order to deliver a seamless and unified experience to the customer.

Organizations are in a digital race, where the speed at which IT can reliably deliver new features and innovations is what sets them apart from their competition. Yet in most organizations, IT projects are failing to deliver, either on-time or on-budget.

"Studies have shown that software specialists spend about 40 to 50 percent of their time on avoidable rework rather than on what they call value-added work, which is basically work that's done right the first time..." - Robert Charette, IEEE Spectrum, Sept. 2005

The Need to Eliminate Waste

Removing waste in software development can result in significant cost savings but, more importantly, it can reduce the length of the software development lifecycle, allowing businesses to deliver solutions to market faster and improving an organisation’s innovation, competitiveness, and responsiveness in the marketplace.

Within SOA and BPM projects, there are many forms of waste, but some of the biggest causes of waste include:

  • Manual Build and Deployment of Code is Error Prone
  • Late Integration
  • Test Teams Idle
  • Defects Discovered Late in Delivery

Manual Build and Deployment of Code is Error Prone

Manually building and deploying code is a resource intensive and highly error prone process; ask anyone to perform a task tens, hundreds, or even thousands of times and you will find that there are inconsistencies / errors; this is further compounded by the fact that in most organizations there are different individuals and teams performing these tasks in each environment.

An incorrect deployment is one of the most common causes of issues when promoting code into a staging environment. Small errors, such as the misconfiguration of a middleware component, can cause issues that are difficult to diagnose and rectify, often requiring many days / weeks of man effort to resolve. As a result, we’re often left with a situation where deployed code fails to work, accompanied by the all too familiar expression:

“Well, it worked in my environment!”

These are not one-off issues, but rather a steady drip, drip, drip of issues through all stages of the project lifecycle, resulting in many months of wasted man effort and lost productivity, leading to missed milestones, project delays, and the inevitable cost blow-out.

Late Integration

Since manual builds are so time consuming, stressful, and error prone, the natural tendency in a project is to minimize the number of releases into each staging environment, and delay these until late in the project when the code in theory will be more stable.

Software components implemented in isolation are full of assumptions about the other components with which they will be integrated. Leaving integration towards the end is a high-risk strategy, since issues with the core architecture or design patterns, for example, may not be exposed until a project is almost completed.

This is especially the case for Oracle SOA and BPM projects, which involve integrating multiple systems together; it is a common mistake for all parties to agree on the interfaces between the systems and then go off and code independently (often for months), with a false sense of security that this is sufficient to avoid the worst issues when it comes time to integrate these pieces together.

System integration and testing is then carried out towards the end of the project, just prior to going into User Acceptance Testing (UAT). Correcting invalid assumptions discovered at this stage in the lifecycle can result in significant time delays, be very costly and may even require significant areas of the code base to be re-written.

Test Teams Idle

One of the biggest wastes in software development is time spent waiting for things to happen. An area where this happens all too regularly is testing.

As previously mentioned, System Integration Testing (SIT) is often pushed back until late in the project, with developers cutting code up until the day before SIT is due to begin. At the eleventh hour, the build is run and the code deployed into the SIT environment, ready for testing to begin.

Unfortunately, for reasons already highlighted, the first delivery into SIT rarely goes smoothly, often requiring weeks or even months of elapsed effort by the development team to get the application to a state where testing can be performed. During this time, the test team is forced to stand by idle.

Once the first release into SIT has been successfully completed, the issues do not end there. Since manual builds and deployments are error prone, the process of deploying each subsequent release so that it is ready and fit for testing can be arduous. The deployed code will often fail basic “smoke” tests and require extensive troubleshooting and fixing before it’s ready to be tested, again with the test team left helpless on the sidelines.

Apart from wasting significant amounts of the test team’s time, the time spent troubleshooting the release is wasting developer time that should be spent on writing the code that delivers business value.

Defects Discovered Late in Delivery

Test teams are caught between a rock and a hard place: each test cycle starts late for reasons outside of their control, yet the milestones for completing each round of testing remain fixed due to project pressures. Squeezing the testing into a reduced timeframe means the overall coverage and quality of testing is compromised, resulting in more defects going undetected in each round of testing.

The net effect is that defects are discovered later in the development cycle, or worse, make it into production. It is well known that the longer these defects remain undiscovered, the more effort it takes to troubleshoot and fix, resulting in significant project delays.

The business is frustrated when “development complete” code can’t be released, or unreliable code not fit for purpose is pushed into production – leading to the inevitable fallout and fire-fighting.

Continuous Delivery for Oracle BPM and SOA

The goal of continuous delivery is to help software development teams drive waste out of their process by simultaneously automating the process of software delivery and reducing the batch size of their work. This allows organizations to rapidly, reliably, and repeatedly deliver software enhancements faster, with less risk and less cost.

Applying Continuous Delivery in the development of Oracle Middleware projects can deliver significant reductions in development time and costs.

Download White Paper

In subsequent posts I will go into further detail on how we can apply continuous delivery to Oracle BPM and SOA projects. Click here to download a white paper on Best Practice for Implementing Continuous Delivery for Oracle Middleware.