ITBrief: Rubicon Red on track for bumper year

ITBrief shares the details of several major announcements from Rubicon Red that will help the company grow to new heights, putting it on track for a bumper year.

Rubicon Red has achieved a major go-live at an Australian Tier 1 financial institution, attained Oracle BPM 12c Specialisation, and will feature as a sponsor of the upcoming inaugural Oracle Cloud Summit in Melbourne.

The fintech 'go-live' will transform the lending fulfilment process for both fulfilment staff and 2000-5000 bankers across the country. The new system will allow banking customers to potentially see the green light on loan approvals up to three times faster, with better customer service.

Rubicon Red has also achieved Oracle PartnerNetwork Specialisation for Oracle Business Process Management Suite 12c. This suite was the key platform used to deliver the new banking system.

“As Oracle BPM and SOA Specialists, maintaining our staff expertise and ongoing enablement has always been important for Rubicon Red and we recognise the added value of specialisation for us and our customers,” comments Rubicon Red’s CEO John Deeb.

“With our focus now on helping our customers with their digital transformation and transition to the Cloud, we are also investing heavily in building and maintaining our expertise in Oracle Cloud,” Deeb says.

Read the full article here.

Running WebLogic 12.2.1 on Docker

It's a sign of the times when Oracle mentions that the latest version (12.2.1) of their Oracle WebLogic Java EE Application Server was released last week on Docker, shortly before it was released as a plain old executable installer.


But before you get too excited, let me just say one thing. If you were hoping to run something like this:

docker run -d oracle/weblogic:12.2.1

...and then, voilà! ...you might be a tad bit underwhelmed.

Now don't get me wrong, you can run WebLogic 12.2.1 on Docker containers and it works well, some might say very well. All of the WebLogic Server features are available in the container AND... you've got the added portability and flexibility that Docker brings. There are, however, dare I say, more hoops to jump through in the initial setup than you've come to expect from the Docker experience. Let me explain.

As anyone who is familiar with Docker knows, running a container is as simple as "docker run <image>". Basically, this one command will search locally for the Docker image and, if it's not found, will securely pull it down from the internet. Then, it will start it up in seconds (or less), ready to accept requests. Simple, right?!

The user experience Docker has built around existing Linux container technology is one of the many reasons Docker containers are becoming increasingly popular. More so, if DevOps and Continuous Delivery are about eliminating wasteful activities in order to innovate faster, Docker-based containers are a good example of this. When it comes to Docker, the value-added time seems to be pretty close to the elapsed time.

As I mentioned earlier, Docker will run any container for you with a single command. Or, if you're the rock-dwelling type that doesn't believe me, go try it for yourself, as shown below. Execute "docker run tomcat:8.0" and, seconds after the initial download, the Tomcat web server will be running and accessible from a browser. Pretty cool huh?
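For example (a minimal sketch; the -p flag and container name are my additions to publish Tomcat's port to the host, since without it the server is only reachable via the container's IP):

docker run -d -p 8080:8080 --name=tomcat8 tomcat:8.0
# once the image layers finish downloading, browse to http://localhost:8080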

With Oracle WebLogic Server on Docker there's a bit more of 'dem hoops I spoke about earlier.

 

  1. Sign up for an Oracle account at their website (if you don't already have one)
  2. Download the JDK and Oracle WebLogic Server installers from their website. You'll need to be logged in with your Oracle account.
  3. Install Git, then clone the repository with "git clone https://github.com/oracle/docker-images.git". (Edit: as per Bruno's comment, the repository URL was changed from https://github.com/oracle/docker.git to https://github.com/oracle/docker-images.git.) If you are really against Git for some reason, you can download the whole source as a zip file instead from https://github.com/oracle/docker-images/archive/master.zip

  4. Now, place the previously downloaded binaries in the following locations:
    • OracleWebLogic/dockerfiles/12.2.1/fmw_12.2.1.0.0_wls_quick.jar (you'll have to extract this from the downloaded zip file first)
    • OracleWebLogic/dockerfiles/12.2.1/jdk-8u60-linux-x64.rpm (If, like me, you're inclined to use the latest JDK, you'll need to hack some of the files to point to the different file name and even change the MD5sum, as the build validates against it.)
  5. Build the Docker image from scratch. The script does a "docker build" under the covers but also does some validation of the installation binaries against the md5sum records.
    • sudo sh buildDockerImage.sh -v 12.2.1 -d
    • Note: Checksums for container images are already built into Docker, but that's not quite the "Oracle way", because Oracle requires you to download the install files and build your own images.
  6. Now it's time to create a second image for our running instance, based on the WebLogic 12.2.1 binary Docker image we just built. Simply navigate to the OracleWebLogic/samples/1221-domain directory and rename Dockerfile.empty or Dockerfile.supplementaldomain to Dockerfile. Oracle wants you to rename it because they are simply providing templates. While it does actually work out of the box, I suspect they are assuming you may want to customise your platform, like adding a data source or some JMS messaging configuration.
  7. Run the following from the sample directory as in the previous step to build our WebLogic Server instance Docker image. This will perform all the automated steps defined in the Dockerfile.
    • sudo docker build -t oracle/weblogic:12.2.1 .
  8. Almost there... Now it's time to boot up the container.
    • sudo docker run -d --name=wlsadmin oracle/weblogic:12.2.1
  9. Give it a few seconds (or minutes) and there you have it: Oracle WebLogic Server 12.2.1 running in a container with the Admin Console ready to use.
    • To get the container IP address, run "docker inspect wlsadmin | grep IPA". You can then use this to access the WebLogic console at http://<container IP>:7001/console
    • If you're wondering about WebLogic Managed Servers and how to add them, there are some details at https://github.com/oracle/docker-images/tree/master/OracleWebLogic, but I will also be creating another blog post on that soon. At least, that is what I hope.
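To recap the command-line portion end to end, here is a condensed sketch (assuming the WebLogic quick installer and JDK RPM have already been downloaded and placed as per step 4):

git clone https://github.com/oracle/docker-images.git
cd docker-images/OracleWebLogic/dockerfiles
# place fmw_12.2.1.0.0_wls_quick.jar and jdk-8u60-linux-x64.rpm into ./12.2.1 first
sudo sh buildDockerImage.sh -v 12.2.1 -d
cd ../samples/1221-domain
mv Dockerfile.empty Dockerfile
sudo docker build -t oracle/weblogic:12.2.1 .
sudo docker run -d --name=wlsadmin oracle/weblogic:12.2.1
sudo docker inspect wlsadmin | grep IPA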

So what's the lesson in all of this? Is it that the WebLogic and Docker relationship is a match made in heaven? Not quite. But it's a step in the right direction. I'm sure many new Docker users will come out of the Oracle woodwork just because of Oracle's decision to start "certifying" some of their products on Docker.

Oracle WebLogic on Docker is great. But before you can get running there are 9 coarse-grained steps, and probably three times that when you factor in the fine-grained ones (basically, 27 steps). Why did Oracle do this when it could have been one step? Well, I don't know for sure... but my jocular take is that they're implicitly inspiring a powerful new DevOps metric, and of course, pioneers aren't going to like it... This new metric will help us factor in the unfortunate time that will be lost signing End User Licence Agreements (EULAs). Or, as I like to call it:

Legally-binding Added Time Extensions, or L.A.T.E., as it so makes you.

As we brace for the new wave of enterprise container users, perhaps we need to think about what a new enterprise DevOps pipeline might look like. Maybe:

  1. Sign the EULA (manually)
  2. Login securely and download the software. (manually)
  3. Mess around with Git to build your vendor-endorsed application server container image from scratch. (manually)
  4. Profit!

All jokes aside, with the recent announcement of containers coming to Oracle Cloud, I predict there will be a publicly available Docker registry for Oracle software coming soon, which would turn 27 steps into essentially one. But don't expect it to be the public Docker Hub registry. You know how it is...

Happy Dockering! And please leave a comment if you need assistance troubleshooting WebLogic 12.2.1 running on Docker.

Coming soon is a post on MyST and its tight integration with Docker for development/test and for troubleshooting production safely in isolation. I look forward to sharing it with you.

Oracle BPM Parallel Gateway with Multi-Instance processing and bpelx:copyList function

The Oracle BPM parallel gateway is very useful where the business process must perform multiple tasks in parallel. It can be combined with Multi-Instance loop markers to run a subprocess for each element of a set (or array) of data. But when combining these two features, there is an implicit rule regarding the usage of the copyList function, and this blog aims to document it.

The sample use case is as follows:

Consider the example of an Order entity with multiple Order Lines. The status of each order line determines the processing that needs to be done, and once all the processing is complete, we need to merge the final outcome back into the Order entity. The implementation pattern we will follow is to use the parallel gateway to connect different multi-instance sub-processes, each filtering its respective order lines by status and doing the processing. Once everything is done, they converge at the outgoing parallel gateway.

Schema Definition - Order

<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
        xmlns:xsd="http://www.w3.org/2001/XMLSchema"
        xmlns:qt="http://www.mycompany.com/ns/order"
        targetNamespace="http://www.mycompany.com/ns/order"
        elementFormDefault="qualified">
  <annotation>
    <documentation xml:lang="en">Order</documentation>
  </annotation>
  <element name="order" type="qt:OrderType"/>
  <element name="orderLine" type="qt:OrderLineType"/>
  <complexType name="OrderType">
    <sequence>
      <element name="orderID" type="string"/>
      <element name="customerName" type="string"/>
      <element name="customerAddress" type="string"/>
      <element name="orderLine" maxOccurs="unbounded" minOccurs="0" type="qt:OrderLineType"/>
    </sequence>
  </complexType>
  <xsd:complexType name="OrderLineType">
    <xsd:sequence>
      <xsd:element name="orderLineID" maxOccurs="1" type="xsd:string"/>
      <xsd:element name="itemId" maxOccurs="1" type="xsd:string"/>
      <xsd:element name="quantity" maxOccurs="1" type="xsd:int"/>
      <xsd:element name="status" maxOccurs="1" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</schema>

Let's create a simple business process to model the above-mentioned use case:

bpm_issue1

Add the 'order' element as the input to the BPMN process:

bpm_issue2

Complete the association between the input data and the process data object 'order':

bpm_issue3

Below is the modeled business process for the use case. The process takes two parallel gateway paths to process approved and pending order lines separately. Once the processing is complete, the outcome is merged back into the order entity.

bpm_issue4

bpm_issue5

Configure each sub-process to filter only its respective order lines. An example for pending order lines is shown below:

bpm_issue6

The XPath expression to filter the pending order lines is shown below (the expression needs to be set for both the loop data input and the loop data output):

bpm_issue7
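In case the screenshot is hard to read, the filter expression would look something like the following (a sketch; I'm assuming the data object is named 'order' and the ns prefix is bound to http://www.mycompany.com/ns/order):

bpmn:getDataObject('order')/ns:orderLine[ns:status = 'PENDING']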

Do the same for the approved order lines and deploy the business process.

Let's test the business process using two test cases:

Test case 1: contains a payload with the order lines sorted by 'status'

Test case 2: contains a payload with the order lines unsorted by 'status'

Test case 1:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
 <soap:Body>
  <ns1:start
   xmlns:ns1="http://xmlns.oracle.com/bpmn/bpmnProcess/MultiInstanceProcess"
   xmlns:ns2="http://www.mycompany.com/ns/order">
   <ns2:order>
    <ns2:orderID>1</ns2:orderID>
    <ns2:customerName>Ganesh</ns2:customerName>
    <ns2:customerAddress>Melbourne</ns2:customerAddress>
    <ns2:orderLine>
     <ns2:orderLineID>100</ns2:orderLineID>
     <ns2:itemId>10</ns2:itemId>
     <ns2:quantity>2</ns2:quantity>
     <ns2:status>APPROVED</ns2:status>
    </ns2:orderLine>
    <ns2:orderLine>
     <ns2:orderLineID>101</ns2:orderLineID>
     <ns2:itemId>33</ns2:itemId>
     <ns2:quantity>1</ns2:quantity>
     <ns2:status>APPROVED</ns2:status>
    </ns2:orderLine>
    <ns2:orderLine>
     <ns2:orderLineID>102</ns2:orderLineID>
     <ns2:itemId>34</ns2:itemId>
     <ns2:quantity>10</ns2:quantity>
     <ns2:status>PENDING</ns2:status>
    </ns2:orderLine>
   </ns2:order>
  </ns1:start>
 </soap:Body>
</soap:Envelope>

The process completes successfully, and the audit trail below shows this.

bpm_issue8

Now let's try the other test case:

Test case 2:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
 <soap:Body>
  <ns1:start
   xmlns:ns1="http://xmlns.oracle.com/bpmn/bpmnProcess/MultiInstanceProcess"
   xmlns:ns2="http://www.mycompany.com/ns/order">
   <ns2:order>
    <ns2:orderID>1</ns2:orderID>
    <ns2:customerName>Ganesh</ns2:customerName>
    <ns2:customerAddress>Melbourne</ns2:customerAddress>
    <ns2:orderLine>
     <ns2:orderLineID>101</ns2:orderLineID>
     <ns2:itemId>33</ns2:itemId>
     <ns2:quantity>1</ns2:quantity>
     <ns2:status>APPROVED</ns2:status>
    </ns2:orderLine>
    <ns2:orderLine>
     <ns2:orderLineID>102</ns2:orderLineID>
     <ns2:itemId>34</ns2:itemId>
     <ns2:quantity>10</ns2:quantity>
     <ns2:status>PENDING</ns2:status>
    </ns2:orderLine>
    <ns2:orderLine>
     <ns2:orderLineID>100</ns2:orderLineID>
     <ns2:itemId>10</ns2:itemId>
     <ns2:quantity>2</ns2:quantity>
     <ns2:status>APPROVED</ns2:status>
    </ns2:orderLine>
   </ns2:order>
  </ns1:start>
 </soap:Body>
</soap:Envelope>

The process fails with the below error:

bpm_issue9

Error Message: {http://docs.oasis-open.org/wsbpel/2.0/process/executable}mismatchedAssignmentFailure

Fault ID

default/MultiInstanceProcess!1.0*soa_00de7c84-1a46-4324-a7f2-2071ef7a680f/MultiInstanceProcess/830003-ACT10651136503013MultiInstanceBlock_ACT10651136503013_End-ACT10651136503013MultiInstanceBlock_ACT10651136503013.3-2

Fault Time

Non Recoverable System Fault :

<bpelFault><faultType>0</faultType><mismatchedAssignmentFailure xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"></mismatchedAssignmentFailure></bpelFault>

The issue arises because BPM internally uses a function similar to bpelx:copyList to copy the nodes, and unless the nodes selected by the status filter are contiguous, the above pattern will not work. To work around the issue, we need to explicitly sort the order lines by status before passing them through the parallel gateway.
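One way to do this is with a small transformation step before the gateway. Below is a minimal XSLT sketch (my own illustration, assuming the order schema shown earlier) that rewrites the order with its order lines grouped contiguously by status:

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:ord="http://www.mycompany.com/ns/order">
  <xsl:template match="/ord:order">
    <ord:order>
      <!-- copy the scalar fields as-is -->
      <xsl:copy-of select="ord:orderID | ord:customerName | ord:customerAddress"/>
      <!-- emit the order lines sorted (and therefore contiguous) by status -->
      <xsl:for-each select="ord:orderLine">
        <xsl:sort select="ord:status"/>
        <xsl:copy-of select="."/>
      </xsl:for-each>
    </ord:order>
  </xsl:template>
</xsl:stylesheet>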

From the documentation:

http://docs.oracle.com/cd/E23943_01/dev.1111/e10224/bp_manipdoc.htm#CIHICJGH

6.14.6.1 bpelx:copyList in BPEL 1.1

Example 6-65 provides an example of bpelx:copyList in a BPEL project that supports BPEL version 1.1.

Example 6-65 bpelx:copyList Extension in BPEL 1.1

<bpel:assign>
  <bpelx:copyList>
    <bpelx:from ... />
    <bpelx:to ... />
  </bpelx:copyList>
</bpel:assign>

The from-spec query can yield a list of either all attribute nodes or all element nodes. The to-spec query can yield a list of L-value nodes: either all attribute nodes or all element nodes.

All the element nodes returned by the to-spec query must have the same parent element. If the to-spec query returns a list of element nodes, all element nodes must be contiguous.

Monitoring DB Growth for FMW

While working at customer sites, I am sure most of us have encountered the following questions (among others) from clients, especially the DBA team:

  • How do we track tablespace growth for Fusion Middleware products, and SOA-INFRA in particular?
  • When a new business process is deployed into an environment, how do we determine the amount of tablespace its instances will require over time?
  • What will be the impact of increasing the SOA audit level on tablespace growth?

It's very difficult to answer these questions upfront, but if we implement a monitoring solution for tablespace growth, we can potentially answer them. This blog aims to provide a solution that monitors database growth at both the table and tablespace level.

This solution can be scheduled at a weekly/monthly frequency in production to monitor how the tablespace is growing, or it can be used to capture snapshots before and after a load test to understand how much growth the tablespace has undergone. This can act as a vital statistic for sizing tablespace increases whenever a new business process is rolled out to production.

The solution contains two tables to hold the DB growth statistics:

  • SCH_TBL_SIZE_STATS_HDR - Captures the tablespace-level growth statistics
  • SCH_TBL_SIZE_STATS_DTL - Captures the table-level growth statistics

The script below creates the above-mentioned tables and should ideally be run under a schema that has the DBA privileges needed to monitor any required tablespace:

/**
#####################################################################
Table Spec - SCH_TBL_SIZE_STATS_HDR & SCH_TBL_SIZE_STATS_DTL
#####################################################################
@schema_table_size_stats_tbl_script.sql
Tables to contain the statistics regarding the tablespace size growth by schema.
Copyright Rubicon Red Pty Ltd
Author - gkrishna
**/
DROP TABLE SCH_TBL_SIZE_STATS_DTL
/
DROP TABLE SCH_TBL_SIZE_STATS_HDR
/
DROP SEQUENCE SCH_TBL_SIZE_STATS_DTL_SEQ
/
DROP SEQUENCE SCH_TBL_SIZE_STATS_HDR_SEQ
/
CREATE SEQUENCE SCH_TBL_SIZE_STATS_HDR_SEQ
  START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE SEQUENCE SCH_TBL_SIZE_STATS_DTL_SEQ
  START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE TABLE SCH_TBL_SIZE_STATS_HDR
(
  SCH_TBL_SIZE_STATS_HDR_ID NUMBER(18) PRIMARY KEY,
  OWNER_SCHEMA              VARCHAR2(30) NOT NULL,
  RUN_DATE                  DATE NOT NULL,
  MB_ALLOCATED              NUMBER NOT NULL,
  MB_FREE                   NUMBER NOT NULL,
  MB_USED                   NUMBER NOT NULL,
  PCT_FREE                  NUMBER NOT NULL,
  PCT_USED                  NUMBER NOT NULL
)
/
CREATE TABLE SCH_TBL_SIZE_STATS_DTL
(
  SCH_TBL_SIZE_STATS_DTL_ID NUMBER(18) PRIMARY KEY,
  SCH_TBL_SIZE_STATS_HDR_ID NUMBER(18) NOT NULL,
  TABLE_NAME                VARCHAR2(30) NOT NULL,
  NO_OF_ROWS                NUMBER(15) NOT NULL,
  TABLE_SIZE_IN_MB          NUMBER,
  CONSTRAINT SCH_TBL_SIZE_STATS_HDR_FK FOREIGN KEY (SCH_TBL_SIZE_STATS_HDR_ID)
    REFERENCES SCH_TBL_SIZE_STATS_HDR (SCH_TBL_SIZE_STATS_HDR_ID)
)
/

The package below contains the procedure GATHER_SCHEMA_TABLE_SIZE, which gathers the tablespace growth statistics; it needs to be created/compiled in the same schema as the above tables:

Package Specification

--Package Specification
CREATE OR REPLACE PACKAGE SCH_TBL_SIZE_STATS_PKG
AS
  --Type to hold the list of schemas for which the statistics need to be calculated.
  TYPE SCHEMA_LIST IS TABLE OF VARCHAR2(30);
  -- Procedure to gather the schema statistics
  PROCEDURE GATHER_SCHEMA_TABLE_SIZE(
      P_SCHEMA_LIST IN SCH_TBL_SIZE_STATS_PKG.SCHEMA_LIST);
  -- Procedure to clean up the stats before re-runs for the same day
  PROCEDURE CLEANUP_STATS(
      P_SCHEMA_NAME IN VARCHAR2,
      P_RUN_DATE    IN DATE);
END SCH_TBL_SIZE_STATS_PKG;
/

Package Body

--Package Body
CREATE OR REPLACE PACKAGE BODY SCH_TBL_SIZE_STATS_PKG
AS
-- Procedure to gather the schema statistics
PROCEDURE GATHER_SCHEMA_TABLE_SIZE(
    P_SCHEMA_LIST IN SCH_TBL_SIZE_STATS_PKG.SCHEMA_LIST)
IS
  CURSOR LIST_SCHEMA_TABLES_CUR(P_OWNER VARCHAR2)
  IS
    SELECT OBJECT_ID,
           OBJECT_NAME
    FROM   DBA_OBJECTS
    WHERE  OBJECT_TYPE = 'TABLE'
    AND    OWNER       = P_OWNER
    AND    STATUS      = 'VALID'
    AND    GENERATED   = 'N'
    AND    OBJECT_NAME NOT LIKE '%$%'; --System tables.
  l_hdr_id       NUMBER(18);
  l_dtl_id       NUMBER(18);
  l_schema_found VARCHAR2(1);
BEGIN
  FOR l_index IN P_SCHEMA_LIST.FIRST .. P_SCHEMA_LIST.LAST
  LOOP
    -- check to make sure the schema exists, otherwise just continue with the
    -- rest of the schemas in the list.
    BEGIN
      SELECT 'Y'
      INTO   l_schema_found
      FROM   DBA_USERS
      WHERE  USERNAME = P_SCHEMA_LIST(l_index);
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
      dbms_output.put_line('Invalid Schema '||P_SCHEMA_LIST(l_index));
      CONTINUE;
    END;
    --clean up the statistics if they already exist for the day.
    CLEANUP_STATS(P_SCHEMA_LIST(l_index), SYSDATE);
    -- getting the primary key value for the header table.
    SELECT SCH_TBL_SIZE_STATS_HDR_SEQ.NEXTVAL
    INTO   l_hdr_id
    FROM   DUAL;
    -- populating the header table with schema level details
    INSERT INTO SCH_TBL_SIZE_STATS_HDR
      (
        SCH_TBL_SIZE_STATS_HDR_ID,
        OWNER_SCHEMA,
        RUN_DATE,
        MB_ALLOCATED,
        MB_FREE,
        MB_USED,
        PCT_FREE,
        PCT_USED
      )
    SELECT *
    FROM
      (
        SELECT l_hdr_id,
               a.tablespace_name OWNER_SCHEMA,
               SYSDATE RUN_DATE,
               ROUND(a.bytes /1048576,2) MB_ALLOCATED,
               ROUND(b.bytes /1048576,2) MB_FREE,
               ROUND((a.bytes-b.bytes)/1048576,2) MB_USED,
               ROUND(b.bytes /a.bytes * 100,2) PCT_FREE,
               ROUND((a.bytes-b.bytes)/a.bytes,2) * 100 PCT_USED
        FROM
          (
            -- total bytes allocated per tablespace
            SELECT tablespace_name,
                   SUM(a.bytes) bytes
            FROM   DBA_DATA_FILES a
            GROUP BY tablespace_name
          ) a,
          (
            -- free bytes per tablespace
            SELECT a.tablespace_name,
                   NVL(SUM(b.bytes),0) bytes
            FROM   DBA_DATA_FILES a,
                   DBA_FREE_SPACE b
            WHERE  a.tablespace_name = b.tablespace_name (+)
            AND    a.file_id         = b.file_id (+)
            GROUP BY a.tablespace_name
          ) b,
          DBA_TABLESPACES c
        WHERE a.tablespace_name = b.tablespace_name(+)
        AND   a.tablespace_name = c.tablespace_name
        AND   a.tablespace_name = P_SCHEMA_LIST(l_index)
        ORDER BY a.tablespace_name
      );
    -- Now find all the non-system tables in the schema and then populate the
    -- statistics to the detail table
    FOR tab IN LIST_SCHEMA_TABLES_CUR(P_SCHEMA_LIST(l_index))
    LOOP
      -- make sure we compute the statistics first before calculating the table
      -- size.
      EXECUTE IMMEDIATE 'ANALYZE TABLE '||P_SCHEMA_LIST(l_index)||'.'||tab.OBJECT_NAME||
      ' COMPUTE STATISTICS';
      -- getting the primary key value for the detail table.
      SELECT SCH_TBL_SIZE_STATS_DTL_SEQ.NEXTVAL
      INTO   l_dtl_id
      FROM   DUAL;
      -- populating the statistics for each table.
      INSERT INTO SCH_TBL_SIZE_STATS_DTL
        (
          SCH_TBL_SIZE_STATS_DTL_ID,
          SCH_TBL_SIZE_STATS_HDR_ID,
          TABLE_NAME,
          NO_OF_ROWS,
          TABLE_SIZE_IN_MB
        )
      SELECT l_dtl_id,
             l_hdr_id,
             table_name,
             NVL(num_rows,0),
             (
               SELECT SUM(bytes_in_mb) AS total_size_in_mb
               FROM
                 (
                   -- LOB segment sizes
                   SELECT dbs.bytes/(1024)/(1024) AS bytes_in_mb
                   FROM   dba_segments dbs,
                          dba_lobs dbl
                   WHERE  dbl.table_name   = tab.OBJECT_NAME
                   AND    dbs.segment_name = dbl.segment_name
                   UNION
                   -- index segment sizes
                   SELECT dbs.bytes/(1024)/(1024) AS bytes_in_mb
                   FROM   dba_segments dbs,
                          dba_indexes dbi
                   WHERE  dbi.table_name   = tab.OBJECT_NAME
                   AND    dbs.segment_name = dbi.index_name
                   UNION
                   -- table segment sizes
                   SELECT dbs.bytes/(1024)/(1024) AS bytes_in_mb
                   FROM   dba_segments dbs,
                          dba_tables dbt
                   WHERE  dbt.table_name   = tab.OBJECT_NAME
                   AND    dbs.segment_name = dbt.table_name
                 ) tbl_size
             ) AS total_size_in_mb
      FROM  dba_tables tbl
      WHERE tbl.table_name = tab.OBJECT_NAME
      AND   tbl.owner      = P_SCHEMA_LIST(l_index);
      -- Maybe we need a better strategy here for commit.. for now
      -- this should be ok
      COMMIT;
    END LOOP;
  END LOOP;
END GATHER_SCHEMA_TABLE_SIZE;
-- Procedure to clean up the stats before re-runs for the same day
PROCEDURE CLEANUP_STATS(
    P_SCHEMA_NAME IN VARCHAR2,
    P_RUN_DATE    IN DATE)
IS
BEGIN
  --deleting the detail table statistics for the given schema
  DELETE FROM SCH_TBL_SIZE_STATS_DTL
  WHERE SCH_TBL_SIZE_STATS_HDR_ID IN
    (
      SELECT SCH_TBL_SIZE_STATS_HDR_ID
      FROM   SCH_TBL_SIZE_STATS_HDR
      WHERE  TRUNC(RUN_DATE) = TRUNC(P_RUN_DATE)
      AND    OWNER_SCHEMA    = P_SCHEMA_NAME
    );
  --deleting the header table statistics for the given schema
  DELETE FROM SCH_TBL_SIZE_STATS_HDR
  WHERE TRUNC(RUN_DATE) = TRUNC(P_RUN_DATE)
  AND   OWNER_SCHEMA    = P_SCHEMA_NAME;
  COMMIT;
END CLEANUP_STATS;
END SCH_TBL_SIZE_STATS_PKG;
/

Now let's execute the procedure to gather the statistics for the SOA-INFRA schema:

DECLARE
  l_schema_list SCH_TBL_SIZE_STATS_PKG.SCHEMA_LIST;
BEGIN
  l_schema_list := SCH_TBL_SIZE_STATS_PKG.SCHEMA_LIST('DEV_SOAINFRA');
  SCH_TBL_SIZE_STATS_PKG.GATHER_SCHEMA_TABLE_SIZE(l_schema_list);
END;
/
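To run this at the weekly/monthly frequency mentioned earlier, the same call could be wrapped in a DBMS_SCHEDULER job. A minimal sketch (the job name and schedule are my own illustrative choices):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'GATHER_FMW_SIZE_STATS',  -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE
                          l_schema_list SCH_TBL_SIZE_STATS_PKG.SCHEMA_LIST :=
                            SCH_TBL_SIZE_STATS_PKG.SCHEMA_LIST(''DEV_SOAINFRA'');
                        BEGIN
                          SCH_TBL_SIZE_STATS_PKG.GATHER_SCHEMA_TABLE_SIZE(l_schema_list);
                        END;',
    repeat_interval => 'FREQ=WEEKLY;BYDAY=SUN;BYHOUR=2', -- run Sundays at 2am
    enabled         => TRUE);
END;
/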

Once the script finishes, let's check the generated data:

SELECT * FROM SCH_TBL_SIZE_STATS_HDR;

FMW DB Growth statistics tablespace level

The above result set shows the statistics for the 25th and 26th of September 2014, and it can be seen that there is an overall growth of 20 MB in the schema. Querying the detail table gives us the statistics at the table level:

SELECT * FROM SCH_TBL_SIZE_STATS_DTL WHERE SCH_TBL_SIZE_STATS_HDR_ID = 9 ORDER BY TABLE_SIZE_IN_MB DESC;

FMW DB Growth statistics table level

Above is the snapshot of table-level growth for the 26th of September; simple queries can be written to calculate the difference between run dates and identify how much each table has grown in rows/size, as sketched below.
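For example, a growth-delta query comparing two runs might look like this (a sketch; the header IDs 8 and 9 are hypothetical IDs of two consecutive runs for the same schema):

SELECT cur.TABLE_NAME,
       cur.NO_OF_ROWS       - prev.NO_OF_ROWS       AS ROW_GROWTH,
       cur.TABLE_SIZE_IN_MB - prev.TABLE_SIZE_IN_MB AS MB_GROWTH
FROM   SCH_TBL_SIZE_STATS_DTL cur,
       SCH_TBL_SIZE_STATS_DTL prev
WHERE  cur.SCH_TBL_SIZE_STATS_HDR_ID  = 9
AND    prev.SCH_TBL_SIZE_STATS_HDR_ID = 8
AND    cur.TABLE_NAME = prev.TABLE_NAME
ORDER BY MB_GROWTH DESC;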

SOA Suite 12c Quick Start

Let's get started!

In the tutorial below, we will show you just how easy it is to get a "basic" SOA Suite 12c environment up, running and ready for development to commence!

Last month Oracle announced the release of Oracle SOA Suite 12c, which marks a major step forward in supporting "industrial" SOA and offers the industry's most highly integrated middleware platform. With the rapid adoption of Cloud, Mobile and the Internet of Things, the need for a robust, proven and standards-based SOA platform has become central to an organisation's ability to deliver on these key initiatives.

SOA Suite 12c offers a significantly improved development experience. One such example of this is the "quick start" installer which contains everything required to get started developing for SOA Suite 12c in under 30 minutes!

So what is included in the "Quick Start" installer?

  • WebLogic Application Server
  • Coherence In-Memory Grid
  • SOA Suite including:
    • Oracle BPEL
    • Oracle Mediator
    • Oracle Human Workflow
    • Oracle Service Bus
    • Oracle Rules
    • Technology Adapters
  • Enterprise Manager Fusion Middleware Control
  • Lightweight In-Memory Database
  • ...and of course, JDeveloper IDE
    • with all of the mandatory plugins pre-installed!

Please Note: B2B, Healthcare & Oracle Event Processing are separate downloadable add-ons to SOA Suite.

What about Enterprise Deployment?

It is important to note that the SOA Suite 12c "quick start" installer will not give you a production-ready environment. It is merely designed to meet development or evaluation use cases.

When it comes to Enterprise Deployment of SOA Suite 12c, there are a number of important steps to take and the SOA Suite 12c Enterprise Deployment Guide (EDG) is the best place to start!

The EDG can be a daunting guide at first, with some 250+ pages, but there is no need to be afraid! In a series of posts Rubicon Red will guide you in your journey to SOA Suite 12c production. We'll show you how to ensure reliable, repeatable and consistent environment delivery that meets the requirements outlined in the EDG. With the Rubicon Red MyST declarative provisioning tool, you'll see how a highly-available, secure and robust environment can be realised without the pain, sorrow and despair... With SOA Suite 12c and MyST, the Journey is the Reward.

Case Study: Premium Wine Brands – Oracle Fusion Middleware

Premium Wine Brands Pty Ltd Leverages SOA for Process Automation and Application Integration, Increasing Return on Investment while Reducing Cost of Ownership

Premium Wine Brands and implementation partner Rubicon Red were recently recognized for their innovative use of Oracle Fusion Middleware at the Oracle Innovation Award ceremony held at Oracle Open World 2010 in San Francisco.

By using Oracle SOA Suite 11g on Oracle WebLogic Server 11g, Premium Wine Brands has been able to build an environment leveraging service reuse (via a Central Service Repository) to reduce complexity and total cost of ownership. Oracle Service Bus 11g is central to this aspect of the architecture.

Click here to read the full article

Case Study: Powercor and Oracle Directory Services

Rubicon Red enables Powercor to simplify their SOA deployment and reduce time to market with Oracle Directory Services

Powercor is a leading electricity distribution company in Australia.

To make it easier to communicate with electricity end-users, particularly as Powercor updates its electrical meters, the company deployed a new IT infrastructure based around Oracle technologies. This includes Oracle SOA Suite and Oracle Identity Management.

The foundation of this deployment is Oracle Directory Services, which provides authentication and user contact information, and makes it easy for applications to connect to the data via standard interfaces such as LDAP and DSML.

This has made it simpler for Powercor to deploy their new infrastructure and reduced time to market.

This case study shows how Rubicon Red used Oracle Directory Services to enable Powercor to simplify their SOA deployment and reduce time to market.

Click here to view the Oracle Whitepaper

Unified Workflow with the Oracle BPM Suite

Rubicon Red oneSpot provides an extension to the Oracle SOA and BPM Suite 11gR1 that, without copying tasks, presents users with a single unified view of all their tasks across multiple workflow platforms.

Business processes represent sets of logically organized activities spanning multiple IT systems, departments, and roles. Some activities are automated and performed by machines, whereas others are manual and performed by people. In a typical enterprise, these processes are fragmented and buried across multiple applications (such as Oracle eBusiness Suite, SAP, PeopleSoft, Siebel and custom apps), making processes rigid and hard to change. Oracle BPM and SOA Suite 11gR1 is the next-generation solution that allows businesses to define and implement end-to-end business processes that integrate these process fragments together, allowing processes to be easily reconfigured to meet the evolving requirements of the business.

Human Tasks (or workflows) are a fundamental part of any business process, as they provide the channel for communication between business processes and their participants. While Oracle BPM provides a comprehensive workflow solution, the process fragments that execute in the existing IT systems often require users to perform certain tasks, like acting on requests pending approval or flagging a manual fulfilment as complete. Each of these applications includes its own workflow engine, which creates and manages tasks for the part of the business process that it covers. As a result, business users regularly have to log into multiple applications to determine which tasks they have assigned to them at any one time. This means it's impossible for a user to get a single view of all their tasks, or for their manager to easily monitor or manage outstanding tasks for all their employees.

Rubicon Red oneSpot, an extension to the Oracle SOA and BPM Suite, presents users with a single unified view of all their tasks in one spot. This is achieved through the use of a Virtual Task Repository (VTR), which is designed to be connected to multiple workflow task stores and present users with a single integrated view of all tasks assigned to them.

Virtual Task Repository

From within oneSpot a user can open a "virtual" task and work on it as if it were a local task, with all actions and updates performed on the task executed against the real task in the source workflow application. All tasks have a common set of attributes and enable standard operations to be performed against them in a common way, as well as allowing comments, attachments, etc. to be stored against them.

Conclusion: No one should have to open multiple browser windows or applications to check their workflow tasks, so oneSpot allows you to track all your tasks in one view and instantly lets you know when new tasks arrive.

Functional Testing Business Processes In Oracle BPM Suite 11g

Over the last couple of projects, I have been trying to establish a set of methodologies around the initiation, development, deployment and management of business processes using Oracle BPM Suite 11g. In a series of posts, I will be talking about a lot of problem areas that can be effectively tackled in order to deliver a highly successful BPM project using this technology. The areas I intend to cover are around design and modelling best practices, team development, delivery methodologies, testing, troubleshooting, automation, performance tuning and operational management, to name a few.

However, in terms of priority and the nature of pain experienced in each of these areas, I am of the opinion that a great chunk is attributable to testing and maintaining code quality. Business processes, more than ever, need to be truly dynamic so that planned as well as ad-hoc changes can be promoted seamlessly. Also, with Oracle BPM Suite 11g promising greater involvement of business users in terms of making deployable changes to the processes, ensuring that the functionality and quality of the build/changes is maintained is crucial.

In the previous blog post, I elaborated on unit testing strategies for business processes and methods of achieving them from the Oracle BPM Suite 11g studio. Proper unit testing is important in terms of validating that the core logical outcomes of a business process are executed as expected. This ensures that all sequence flows and paths in a business process are tested and conformant. The post can be accessed here:

However, apart from testing the basic process paths, a lot of other things demand testing too. Business processes can populate data in downstream systems, integrate with services, create events, send notifications, and assign workflow tasks which may have their own lifecycle. From a quality perspective, all of these steps must be verified too. The overall functionality of the business processes must be thoroughly tested to determine their reliability before being deployed to the actual IT infrastructure.

Organizations do invest in quality measures to evaluate features of business processes by employing a comprehensive testing strategy involving analysts, developers, functional testers and business users. However, they tend to begin investing in test automation very late in the project lifecycle. This poses a significant threat and concern, particularly when the business processes are continuously changing and evolving. An increasing pace of business process change, as well as applications growing more complex, means that many functional testing teams are reaching breaking point. Change management becomes difficult, as there is always a risk of introducing new bugs along the way. As much as we would all like to have agility, the process is defeated by improper developer operations. Having a regression functional testing suite very early in the BPM project development cycle is, in my opinion, a must-have for any successful and timely implementation. More often than not, it is not the choice of technology that is responsible for this, but rather the lack of some important know-how.
Typical Problem Areas and how Automating Functional Testing can help
In my interactions with different project teams, here are some typical questions they have, especially with respect to testing business processes in Oracle BPM Suite 11g.
  • Is it really possible to approach functional testing early in the project cycle, considering the nature of the composite applications involved (business processes, services, rules, etc.)?
  • Human task components in business processes need user interfaces (which may have their own development lifecycle). How can functional testing be achieved before a fully operational UI is available?
  • Are there any developer tools, approaches or frameworks that can be utilized to automate functional testing?
  • What kind of functional testing strategy is suitable, cost-efficient and able to contain a wide variety of problems in the project lifecycle?
If we begin to investigate the possibilities, they may be endless. I am aware of many custom products which promise to take care of all of these; however, they tend to be expensive, have a steep learning curve and may not always fit the bill. This acts as a big deterrent. In this article, I am keen on presenting an approach that is quick, easy, developer friendly and can easily be catered for in the development phase of the project.

The best way to begin with functional testing is to base it on the business process design. Model-driven functional testing ensures that critical processes and their paths are covered and that any variation of those tests can easily be amended and rerun without adding significant time to the test cycle. Functional testing can start earlier, and better quality tests are produced and maintained in a test repository. This can then be part of the continuous build and integration iterations to ensure sufficient test coverage in accelerated delivery cycles. A part of the model-driven testing strategy was described in detail with respect to ascertaining the cyclomatic complexity factor of business processes and deriving test cases based on that. This approach is a very good starting point, especially because it can be used to build unit tests too. Once the basic process paths are determined, the functional test suites can incorporate a lot of test steps to cover the functional aspects, such as validating business logic and rules, verifying data across systems, checking events and notifications, etc.
With all this being said, automation of functional testing will only be successful if an organization's underlying quality fundamentals are solid and everyone clearly understands how testing can continuously support process iterations. Another big advantage of automating early is the potential time savings. Considering that multiple stakeholders will need to repeat their testing tasks every time something changes, doing them manually becomes a burden sooner or later. Have a look at the sheet I've compiled based on the various times and frequencies of performing functional tests and the effort involved in each case.
image
The columns marked in green in the above sheet reflect the most likely scenarios in terms of the amount of time spent on testing in various Oracle BPM Suite 11g projects. The effort involved in manual testing can range from 20 days to 4 months, from a very simple and infrequently changing business process to a complex and often-changing one. This effort does not account for test planning but simply reflects the time spent in test execution. The columns in red represent scenarios that can apply to certain project types too, and in those cases the amount of time spent in manual testing can be as much as 8 months. This, of course, is under the assumption that all quality standards of software functional testing are maintained.
Approach and Tooling
As indicated before, there are many players in this field providing varied approaches and options; this article will not debate their merits here. In the course of my involvement with BPM projects, I tend to incline towards soapUI. This is particularly due to the fact that it is pretty inexpensive (there is also a free version if the ~$350 license cost seems significant) and does most of the basic functional testing that can be considered acceptable from a quality perspective. It is fairly straightforward too, and the learning curve is minimal. Chances are also high that in most projects there will be developers with a considerable amount of experience using soapUI, if they have been involved in a SOA project in the past.
Having said that, this article will talk about how to plan and implement a comprehensive and automated functional testing suite for business processes developed using Oracle BPM Suite 11g.
The Employee Expense Approval Business Process
I will use the same employee expense approval business process that was discussed in my previous blog. This process has all the primary ingredients of any complex business process, i.e. it is fairly unstructured and has business rules, human workflow components, arbitrary cycles, gateways and events, and writes to external applications. As a refresher, the process map looks like the below in the JDeveloper studio.
image
Technology Prerequisite(s)
The modelling workspace and the version of Oracle BPM Suite used in this demonstration is 11gR1 PS5, i.e. 11.1.1.6. The version of soapUI being used is 4.5.0 (excuse my backwardness in terms of working with the latest versions of these tools). I use the open source version of soapUI, but for better productivity and ease of use you may consider using the Pro version. A connection to the database is also necessary to be able to create certain database objects that the project requires. The version of the database can be any generic one, with the only exception being that it has to be an Oracle database. The concepts discussed in this blog are, however, generic and applicable to any version of the above products.
Project Setup
In order to import the implemented expense approval project into JDeveloper, obtain the composite (ExpenseApprovalComposite.zip), unzip it and open the application (.jws) file in the studio. As can be noticed from the BPMN model, the process populates some database tables with expense and payment records at various stages. To be able to deploy and execute the project, a database schema and certain tables need to be created. On the WebLogic server side, there is a need to create a data source and add a new connection factory to the database resource adapter.
Doing this set-up is pretty simple. I have created some SQL and ANT scripts that can do all of this. Follow the instructions below to execute these scripts and create the required resources.
Create the Database Artefacts
  • Download and unzip the DatabaseSchema.zip file in a local directory.
  • Open and edit create_expense_structure.bat and simply substitute the sys password with the one for the database you are using. This script assumes that a local XE flavour of the database is running; if this is any different, then modify the bat file accordingly.
  • Open a command prompt and cd to the directory containing the bat file. Simply run the batch file in the prompt.
  • This will create a database user “EXPENSE” and two tables in it called EMPLOYEE_EXPENSE_PAYMENT and EMPLOYEE_EXPENSE_RECORD.
  • Verify that the scripts ran okay and the database artefacts were created.
Create the Weblogic Server Artefacts
  • Unzip the ResourceAdapter.zip in a local directory.
  • Open the build.properties file and substitute the middleware home location with the one being used. Also modify the values of the database server, WebLogic server, domain, connection credentials, etc. Properties that need to be changed are marked in the build.properties file through comments.
  • Open a command prompt and cd to the local directory where the contents of the .zip file are placed.
  • Type ant makeDataSource to create the required data source in the WebLogic server to connect to the EXPENSE schema created in the step before.
  • Once the build is successful, type ant createResourceAdapterEntries in the same command prompt.
  • This will create the database adapter connection factory and update it with the data source created before. It also updates the DbAdapter deployment plan to commit all these changes onto the server.
  • Verify that the script executes successfully. If it doesn't, then there may be a problem with the substituted values in the build.properties file.
  • As a last resort, you may configure these resources manually on the server, or alternatively send me your configuration files along with the error(s).
Creating the Functional Test Suite
Once the project has been set up and the required artefacts are created, it is time to look into creating a functional testing suite for the expense approval process. Once we create the functional tests, they can be used for regression every time changes are made to the business processes. A good practice is to only accept a change when it is accompanied by a functional test case and when all existing test cases pass the regression. However, the dynamics here may depend upon the level of the business users' involvement in making ad-hoc changes to the business process after it is deployed. If business users are empowered to make dynamic changes, then it cannot be expected of them to create/modify the functional test cases supporting their changes. This situation demands a change control process. I am of the personal opinion that, to ensure the quality of business processes running in the production environment, changes must always be supported with regression. Whether business analysts and process analysts create these tests or whether they get them done by developers is a matter of having governance around the change control.
Nevertheless, it is pertinent to have an initial set of functional tests. If you managed to read the previous blog post on unit testing business processes in Oracle BPM Suite 11g, you must have noticed how we employed cyclomatic complexity to determine the number of unique test cases required to provide one hundred percent test coverage. From the functional testing perspective, we will still create the same number of test cases, but also have a series of test steps to validate that the business process is indeed doing all the right things.
As determined before, the following test cases show the various routes a process can take from all the conditional nodes. To ensure the utmost quality, we will have as many functional test cases too.
Test Case 1: Submit, Auto Approve, Finance Approve, Approve
Test Case 2: Submit, Auto Approve, Finance Reject, Refer
Test Case 3: Submit, Refer, Manager Approve, Finance Approve, Approve
Test Case 4: Submit, Refer, Manager Approve, Finance Reject, Refer
Test Case 5: Submit, Refer, Manager Reject, Reject
The following section will demonstrate a series of test steps created to fully functionally test one of the test cases (here Test Case 3: Submit, Refer, Manager Approve, Finance Approve, Approve). Other test cases can be created along similar lines.
  • Deploy the ExpenseApprovalComposite to a BPM server and grab its WSDL from the Enterprise Manager (since the business process has a message start event, it can be invoked as a web service).
  • Launch soapUI and create a new soapUI project. Enter ExpenseApprovalTest as the project name and copy the WSDL location into the Initial WSDL/WADL field. Also check the option to create sample requests for all operations.
image
  • This will create a soapUI project in the existing workspace. Now right click on the project and click on New TestSuite to create a new test suite inside the project. Name it Expense Approval Functional Tests.
  • A test suite can have multiple test cases that can be run either individually or all together as part of the test suite (either sequentially or in parallel). Once the test suite is created, right click on it to create a New TestCase inside it. Name the test case T3_Refer_ManagerApprove_FinanceApprove_Approve.
image
  • A test case is constituted of a series of test steps that are executed in sequence, one after another. Testing a business process involves initiating the business process through the message-based event, capturing data in the human tasks, simulating user behaviour on the workflow screens, and verifying that once the process is completed, it is in the right state, has populated the right statuses in the end systems and has followed the expected process path.
  • A test case in its due steps may need to refer to some constant values or store intermediate values to be used in later steps. soapUI allows the creation of global, test suite level as well as test case level properties. While properties at the test suite level are more appropriate to hold any constants (or pass them between multiple test cases), properties in the test cases are used primarily for passing information between multiple steps. Create the following properties at each level. You will notice that the test suite properties have been set to some constant values.
image
  • The properties can then be accessed as variables using the constructs below.
${#TestSuite#TASKSTATE}
${#TestCase#CONVERSATION ID}
  • It is commonly believed that a business process with human tasks needs a functional user interface in order to test it. However, this is not true. Oracle BPM Suite 11g offers both Web Service and Java-based APIs to interact with the workflow tables. The operations in these APIs can be used to search and query for tasks, update task payloads and outcomes, attach comments and attachments, etc. Access the following posts to get a glimpse of the human workflow APIs.
  • In this article I will be using the Web Service APIs to simulate the human tasks when building the functional test steps. The first step in the test case is to create a unique identifier (random number) that can be used to set the conversation id while invoking the business process. Setting up a conversation id is important, as the composite/component instance IDs can be determined by querying the dehydration store against it. In soapUI, a Groovy Script based test step can be used to generate a random number. Name this step Create Conversation Identifier and paste the following Groovy code snippet into the code editor.
// generate one UUID, store it in the test case property, and log the same value
def conversationId = java.util.UUID.randomUUID().toString()
testRunner.testCase.setPropertyValue("CONVERSATION ID", conversationId)
log.info(conversationId)
image
  • The Groovy script dynamically populates the CONVERSATION ID property in the test case with the uniquely generated identifier.
  • Add a new SOAP Test Request and name it Submit Employee Expenses – submit expense application. When asked to specify the operation to invoke, select ApproveEmployeeExpenseBinding –> submitExpenseApplication from the available options. As a matter of best practice, I typically append the name of the process model object being interacted with to the name of the test step. This is important to maintain visibility of which event in the process is invoked or which human task is updated/actioned.
image
  • Copy the following payload to trigger the business process. The combination of business unit and expense total will force the business rule activity in the business process to send this instance to a manager review.
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:app="http://www.rubiconred.com/ApproveEmployeeExpense"
                  xmlns:empl="http://www.rubiconred.com/bpmtest/empl">
  <soapenv:Header/>
  <soapenv:Body>
    <app:submitExpenseApplication>
      <empl:Expense>
        <empl:Employee>
          <empl:EmployeeID>988977698</empl:EmployeeID>
          <empl:FirstName>Arrun</empl:FirstName>
          <empl:LastName>Pareek</empl:LastName>
          <empl:Email>arun.pareek@rubiconred.com</empl:Email>
          <empl:PhoneNumber>0424987673</empl:PhoneNumber>
          <empl:ManagerID>6312313</empl:ManagerID>
          <empl:DepartmentID>132313</empl:DepartmentID>
          <empl:BusinessUnit>3</empl:BusinessUnit>
        </empl:Employee>
        <!--1 or more repetitions:-->
        <empl:ExpenseList>
          <empl:Amount>80000</empl:Amount>
          <empl:Type>Travel</empl:Type>
          <empl:Date>2013-02-06</empl:Date>
          <empl:Justification>Travel on Business</empl:Justification>
          <empl:Authoriser>Matt</empl:Authoriser>
        </empl:ExpenseList>
        <empl:ExpenseList>
          <empl:Amount>40000</empl:Amount>
          <empl:Type>Travel</empl:Type>
          <empl:Date>2013-02-07</empl:Date>
          <empl:Justification>Travel on Business</empl:Justification>
          <empl:Authoriser>John</empl:Authoriser>
        </empl:ExpenseList>
      </empl:Expense>
    </app:submitExpenseApplication>
  </soapenv:Body>
</soapenv:Envelope>
  • It is also very important to note here that the business process, by virtue of its message-based start and end events, is implemented as an asynchronous web service. When the process completes, it will send an asynchronous call-back to the initiator at a call-back address. Basically, Oracle BPM Suite 11g supports WS-Addressing. The conversation id generated in the previous step can be used as the unique correlation identifier when invoking the submitExpenseApplication operation. Also, the initiator can define a call-back address (Reply To in the addressing header) where the engine can send the call-back response.
  • In soapUI, WS-Addressing related headers can be added by clicking on the WS-A tab in the test step. The WS-A addressing enabled option has to be selected and the must understand flag set to TRUE. The MessageID field is populated with the CONVERSATION ID test case property. Also provide a reply-to address to let the BPMN engine register the call-back address for this instance. This will be used in a future step.
  • The first test to be performed is to determine whether the SOAP request was successfully delivered to the endpoint. Since it is an asynchronous request, the response is not received in the same thread; hence the only way to determine success is to verify that the HTTP status code received is correct. To do that, add a status assertion for a Valid HTTP Status Code and enter 202 (Accepted) as the expected status code. This status code reflects that the request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed or fault out when processing actually takes place. The 202 response does not require an initiator's connection to the server to persist until the process is completed.
image
  • The above test step will create an instance of the expense approval process, which can be seen and verified through Enterprise Manager. The next test step to add retrieves the composite instance id, ECID and composite state from the CUBE_INSTANCE and COMPOSITE_INSTANCE tables in the dehydration store, keyed on the conversation id. The conversation id supplied by the initiator is stored in the CUBE_INSTANCE table, so this correctly fetches the metadata for the instance created in the test steps above.
  • Add a new JDBC Request test step and name it Retrieve Composite Instance ID. A JDBC test step requires a JDBC driver, a connection string and an SQL query to be specified. Before this can even work, you will need to copy ojdbc5.jar into the $soapUIHome/bin/ext folder; this jar enables soapUI to connect to an Oracle database. The following values can be specified for the different properties of the JDBC Request test step. Be careful to substitute the database connection details with ones reflecting your environment.
Driver: oracle.jdbc.driver.OracleDriver
Connection String: jdbc:oracle:thin:ps5_soainfra/welcome123@localhost:1521:xe
SQL Query:
SELECT COMPOSITE_INSTANCE.ECID, COMPOSITE_INSTANCE.ID, CUBE_INSTANCE.COMPOSITE_NAME,
       CUBE_INSTANCE.COMPOSITE_REVISION, COMPOSITE_INSTANCE.STATE
FROM COMPOSITE_INSTANCE, CUBE_INSTANCE
WHERE COMPOSITE_INSTANCE.CONVERSATION_ID = CUBE_INSTANCE.CONVERSATION_ID
AND CUBE_INSTANCE.CONVERSATION_ID = '${#TestCase#CONVERSATION ID}'
image
  • The SQL query returns important metadata about the composite instance that is created. soapUI allows assertions to be added to each test step in a test case; assertions are a very good way to add functional validation at every step. The following screenshot shows the four different assertions added to this test step. The composite name and version are validated against the values of these properties specified in the functional test, and a state value of 0 confirms that the process instance has been initiated and is running.
image
  • The instance might not be created and available to query in the dehydration store immediately; it may take a couple of seconds for the instance to be initiated. It may therefore be necessary to add a Delay test step between the SOAP and JDBC Request test steps.
  • The values retrieved from the JDBC Request test step and the original payload passed in the SOAP test request can be saved to test case properties so that they can be used in later test steps. A Property Transfer test step can be added for this purpose. Name the step Capture Process Properties and add four transfers, viz. ECID, INSTANCEID, PAYLOAD and EMPLOYEEID. For each property, the appropriate value is captured by specifying the source test step, the variable (request, response, raw request, etc.) and the target test case property. When the property transfer step is executed, the test case properties will hold the most current values based on the payload entered and the results retrieved from the database schemas.
image
  • Once the process is triggered it creates an entry in the EMPLOYEE_EXPENSE_RECORD table with the relevant employee and expense details. The EXPENSE_APPROVAL_STATUS column at this stage should be marked with a SUBMITTED status, assuming it is used for tracking the state of expenses. From a functionality standpoint it should be verified whether this is indeed the status. Doing that in soapUI means creating another JDBC Request test step. Name it Verify Expense Record – Create Expense Record (here Create Expense Record is the name of the BPMN activity in the process model that executes the database update).
  • In this JDBC test step, the connection string should point to the Expense schema that was created initially. The SQL query fetches the record for a given employee, identified uniquely by the employee id and the record reference (populated with the ECID).
SELECT * FROM EMPLOYEE_EXPENSE_RECORD WHERE EMPLOYEE_ID = '${#TestCase#EMPLOYEEID}' AND RECORD_REF = '${#TestCase#ECID}'
  • In the same test step, assertions can be added to verify the expense approval status and also the expense approver, if applicable. A JDBC status assertion is also added, as this is something that should preferably be added to every JDBC Request step.
image
  • For this particular test case, the business rule will determine that a manager approval is required before the expense is presented to the finance user. The business process instance initiates a workflow task that appears in the designated user’s in-tray. In the most basic scenario the task is available to be actioned by the user(s) assigned to the process swimlane role.
  • At this stage, if the audit trail of the instance is viewed, it will show that the process has started and is waiting at the Approve Employee Expense human task activity.
image
  • Interacting with the task is straightforward. Fortunately there is no need for a fully designed UI to test user interactions with the business process: Oracle BPM Suite 11g provides Human Workflow web services whose operations can be invoked to simulate the UI behaviour.
  • When an instance reaches any human task activity the human workflow engine initiates a task with the appropriate task metadata along with a unique taskId and number.
  • The workflow engine captures this metadata in the WFTASK table in the dehydration store. Querying this table by task definition name, composite instance id and ECID will return the task identifier for the initiated task.
  • Create yet another JDBC Request test step and name it Retrieve Task Identifier – Approve Employee Expense. Use a connection string with the details of the SOAINFRA schema and the following SQL query to uniquely determine the task identifier for the initiated human task.
SELECT WFTASK.TASKID FROM WFTASK WHERE WFTASK.TASKDEFINITIONNAME='${#TestSuite#MANAGERTASK}' AND WFTASK.ECID =  '${#TestCase#ECID}' AND WFTASK.STATE = '${#TestSuite#TASKSTATE}' AND WFTASK.COMPOSITEINSTANCEID=  '${#TestCase#INSTANCEID}'
image
  • The task identifier fetched by this SQL query will be used in later test steps to query and update task details via the workflow APIs. To carry it forward, a property transfer step (Capture Human Task Identifier – Approve Employee Expense) has to be created to copy this value into the TASKID property of the test case.
image
  • This task id will now be used in the forthcoming test steps to operate on the human task. To do so, we first need to add two more WSDL files to the soapUI test project: those of the TaskService and TaskQueryService human workflow services. These WSDLs are typically located at the following endpoints, after substituting the appropriate values for the host and port where the SOA server is running. Adding them makes all of their operations available in the soapUI project.
http://<host>:<soaserverport>/integration/services/TaskService/TaskServicePort?wsdl
http://<host>:<soaserverport>/integration/services/TaskQueryService/TaskQueryService?wsdl
  • Once the WSDLs have been added, add another SOAP Request test step. When prompted for a name, enter Get Task Details from Task Identifier – Approve Employee Expenses (long, but very descriptive, pinning down the actual human task activity in the process model for which this test step is performed).
  • Select the getTaskDetailsById operation from the TaskQueryService WSDL to retrieve the task details for a given task identifier; invoking this operation also requires appropriate credentials. The task identifier is available in the test case property TASKID. The operation returns the entire human workflow task, consisting of the payload as well as additional task metadata. To ascertain that the functional requirements are met, a few assertions should be added. As this is a synchronous service invocation, add assertions for a valid SOAP response, schema compliance and the absence of a SOAP fault; these are available under the Compliance, Status and Standards tab of the Add Assertion window.
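soapUI will generate the request skeleton from the WSDL; once the credentials and the captured task identifier are filled in, it will look roughly like the sketch below. The user jstein and the password are illustrative only, and element names may vary slightly between patch sets, so always start from the generated request.
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:tas="http://xmlns.oracle.com/bpel/workflow/taskQueryService"
                  xmlns:com="http://xmlns.oracle.com/bpel/workflow/common">
   <soapenv:Body>
      <tas:taskDetailsByIdRequest>
         <com:workflowContext>
            <com:credential>
               <com:login>jstein</com:login>
               <com:password>welcome1</com:password>
            </com:credential>
         </com:workflowContext>
         <tas:taskId>${#TestCase#TASKID}</tas:taskId>
      </tas:taskDetailsByIdRequest>
   </soapenv:Body>
</soapenv:Envelope>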
  • Apart from the web service assertions, also verify that the task state is correctly set to Assigned, and that the task version reason equals TASK_VERSION_REASON_INITIATED, confirming that the task is in an initiated state at this stage. These elements are children of the systemAttributes element in the task data and can be queried with the following XPath expressions.
/*:Envelope/*:Body/*:task/*:systemAttributes/*:state
/*:Envelope/*:Body/*:task/*:systemAttributes/*:versionReason
image
  • The task metadata returned from the service call has a section called the task payload, which represents the initial arguments passed to the human task when it was initiated. As part of a manager approval, or any other interaction with a form-based user interface, process participants may change data in the forms (the editable arguments), after which they may action the task (reassign, delegate, approve, reject, suspend, etc.). Depending on the functional requirements of task management, appropriate test steps may have to be created at this point to validate the behaviour. For instance, tasks may always be reassigned to a team leader who then reassigns them to the actual participants who action them; in that case a test step invoking the reassignTask operation of the TaskService may be required. This test case assumes that a team lead simply puts in his comments (which are part of the payload here and saved to the expense record table) and approves the expenses.
  • Add a property transfer step and name it Capture Human Task Payload – Approve Employee Expense. The purpose of this step is to take the task payload from the response of the previous step, modify the comments child of the approval status element (replacing it with a value such as “Manager Approved”) and transfer the result to the TASKPAYLOAD property of the test case. In this example an XQuery expression is used to achieve this, though any other approach can be employed too.
image
  • Once the payload has been modified (mimicking the user’s behaviour in the form), the task metadata has to be updated in the dehydration store so that the workflow engine is aware of the changes. This is done by passing the modified task data to the updateTask operation of the TaskService. Add another SOAP Request test step and call it Update Human Task – Approve Employee Expense, choosing updateTask as the operation from the TaskService. soapUI will populate a default request, but the service only needs the credentials of the user performing the update and the task itself (now available in the TASKPAYLOAD property). Apart from the usual SOAP assertions, an important assertion here is on the state of the versionReason element in the task: verify that it has now changed to TASK_VERSION_REASON_UPDATED.
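Trimmed down, the generated updateTask request essentially carries the credentials and the modified task, roughly as follows (again, credentials are illustrative and the element names come from the generated request):
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:tas="http://xmlns.oracle.com/bpel/workflow/taskService"
                  xmlns:com="http://xmlns.oracle.com/bpel/workflow/common">
   <soapenv:Body>
      <tas:updateTask>
         <com:workflowContext>
            <com:credential>
               <com:login>jstein</com:login>
               <com:password>welcome1</com:password>
            </com:credential>
         </com:workflowContext>
         <!-- expands to the complete task element captured and modified earlier -->
         ${#TestCase#TASKPAYLOAD}
      </tas:updateTask>
   </soapenv:Body>
</soapenv:Envelope>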
image
  • The final step in moving this task out of the manager’s inbox is to emulate his action, which can be either approve or reject. Task outcomes can be committed to the workflow engine through the updateTaskOutcome operation of the TaskService. Once an outcome has been received, the human task engine sends a call-back with the response to the business process, which is waiting at the human task step. The business process then proceeds, updates the expense record table with the manager’s decision and evaluates the manager’s outcome at the exclusive gateway. As we are interested in the scenario where the manager approves the expenses, the business process will then initiate a task for the finance approver.
  • The operation only requires the outcome flag and the task identifier, along with the user credentials, in order to submit the request. There is one important thing to keep in mind for this test step: only users who are mapped to the swimlane role to which the task is assigned (or reassigned) can update the task outcome.
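A trimmed-down updateTaskOutcome request might look like this; the outcome values depend on how the human task outcomes were modelled, with APPROVE/REJECT assumed here:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:tas="http://xmlns.oracle.com/bpel/workflow/taskService"
                  xmlns:com="http://xmlns.oracle.com/bpel/workflow/common">
   <soapenv:Body>
      <tas:updateTaskOutcome>
         <com:workflowContext>
            <com:credential>
               <com:login>jstein</com:login>
               <com:password>welcome1</com:password>
            </com:credential>
         </com:workflowContext>
         <tas:taskId>${#TestCase#TASKID}</tas:taskId>
         <tas:outcome>APPROVE</tas:outcome>
      </tas:updateTaskOutcome>
   </soapenv:Body>
</soapenv:Envelope>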
image
  • Create another JDBC Request test step to verify that the expense record has been updated with the expected value; a query along the lines of the sketch below will do.
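The column to assert on and the expected status (for instance a manager-approved value) depend on how the process updates the record, so treat both as assumptions to adapt to your schema:
SELECT EXPENSE_APPROVAL_STATUS FROM EMPLOYEE_EXPENSE_RECORD WHERE EMPLOYEE_ID = '${#TestCase#EMPLOYEEID}' AND RECORD_REF = '${#TestCase#ECID}'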
  • As the task moves to the finance approver’s inbox, the WFTASK table can be queried again to determine the task id for this new task. The following query (similar to the one used before) retrieves the new task identifier; the only difference is the TASKDEFINITIONNAME filter, which has to be set to the name of the finance approval task.
SELECT WFTASK.TASKID FROM WFTASK WHERE WFTASK.TASKDEFINITIONNAME='${#TestSuite#FINANCETASK}' AND WFTASK.ECID = '${#TestCase#ECID}' AND WFTASK.STATE ='${#TestSuite#TASKSTATE}' AND WFTASK.COMPOSITEINSTANCEID= '${#TestCase#INSTANCEID}'
  • At this point, the test steps that fetch the task id from the WFTASK table, copy it to the test case TASKID property, retrieve the task payload, modify it and update the task outcome have to be repeated for the finance approval task. To maintain visibility, append the name of this human task, i.e. Administer Expense Payment, to each of these steps. After these steps are created, the functional test case will have emulated all human steps in the business process, asserted the functional points and verified that the instance behaves as expected.
image
  • Once a member of the finance approval team approves the expenses, the business process updates the payment register as well as the expense record tables. The business process then ends with a message-based end event, viz. expense paid. This triggers an asynchronous call-back to the client that called the business process, if it is waiting for one.
  • The next logical step in the functional testing scenario is to verify that the updates to the end systems have happened correctly, and also to receive the process end response and assert that it carries the right status. It is also recommended to add test steps verifying that the instance state is marked as completed in the dehydration store, to fully cover all functional aspects.
  • To begin with these assertions, add a Mock Response test step to the test case and name it Receive Process Callback Response – expenses paid. Select the expensePaid callback operation from the ApproveEmployeeExpensePortTypeCallBackBinding interface.
  • The port and path have to be 2222 and ApproveEmployeeExpensePortTypeCallBackBinding respectively; this depends on what was entered in the Reply To field of the original SOAP request. The step will automatically be populated with the callback response containing the approval status of the expense. Add the normal web service assertions to this test step, as well as an XPath assertion to verify that the approval status equals FINANCE APPROVE.
image
  • It is also worthwhile to add a property transfer step, Capture Expense Amount Total, which sums all the amounts in the expense list submitted by the employee and assigns the result to the EXPENSETOTAL test case property, for example with the expression shown below.
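Assuming the PAYLOAD property holds the original request, a one-line XQuery expression in the transfer is enough, using the same wildcard-namespace style as the earlier XPath assertions:
sum(//*:ExpenseList/*:Amount)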
image
  • As indicated earlier, the final steps in the test case are a couple of additional JDBC test steps that verify, with the necessary assertions, that the updates to the tables are consistent with the instance data.
image
  • Optionally, it can also be verified that the process instance has completed successfully by querying the dehydration store tables. The CUBE_INSTANCE and COMPOSITE_INSTANCE tables store the process instance header and state data. The state column holds a numerical value which can be translated into a very informative description of the instance’s state. The following query retrieves and decodes the status from these two tables.
SELECT (CASE
WHEN CU.STATE=0 THEN 'INITIATED'
WHEN CU.STATE=1 THEN 'OPEN AND RUNNING'
WHEN CU.STATE=2 THEN 'OPEN AND SUSPENDED'
WHEN CU.STATE=3 THEN 'OPEN AND FAULTED'
WHEN CU.STATE=4 THEN 'CLOSED AND PENDING'
WHEN CU.STATE=5 THEN 'CLOSED AND COMPLETED'
WHEN CU.STATE=6 THEN 'CLOSED AND FAULTED'
WHEN CU.STATE=7 THEN 'CLOSED AND CANCELLED'
WHEN CU.STATE=8 THEN 'CLOSED AND ABORTED'
WHEN CU.STATE=9 THEN 'CLOSED AND STALE'
WHEN CU.STATE=10 THEN 'NON-RECOVERABLE'
ELSE CU.STATE || ' (UNKNOWN)'
END) AS CUBE_INSTANCE_STATE, (CASE
WHEN CO.STATE=0 THEN 'RUNNING'
WHEN CO.STATE=1 THEN 'COMPLETED'
WHEN CO.STATE=2 THEN 'RUNNING WITH FAULTS'
WHEN CO.STATE=3 THEN 'COMPLETED WITH FAULTS'
WHEN CO.STATE=4 THEN 'RUNNING WITH RECOVERY REQUIRED'
WHEN CO.STATE=5 THEN 'COMPLETED WITH RECOVERY REQUIRED'
WHEN CO.STATE=6 THEN 'RUNNING WITH FAULTS AND RECOVERY REQUIRED'
WHEN CO.STATE=7 THEN 'COMPLETED WITH FAULTS AND RECOVERY REQUIRED'
WHEN CO.STATE=8 THEN 'RUNNING WITH SUSPENDED'
WHEN CO.STATE=9 THEN 'COMPLETED WITH SUSPENDED'
WHEN CO.STATE=10 THEN 'RUNNING WITH FAULTS SUSPENDED'
WHEN CO.STATE=11 THEN 'COMPLETED WITH FAULTS SUSPENDED'
WHEN CO.STATE=12 THEN 'RUNNING WITH RECOVERY REQUIRED AND SUSPENDED'
WHEN CO.STATE=13 THEN 'COMPLETED WITH RECOVERY REQUIRED AND SUSPENDED'
WHEN CO.STATE=14 THEN 'RUNNING WITH FAULTS, RECOVERY REQUIRED AND SUSPENDED'
WHEN CO.STATE=15 THEN 'COMPLETED WITH FAULTS, RECOVERY REQUIRED AND SUSPENDED'
WHEN CO.STATE=16 THEN 'RUNNING WITH TERMINATED'
WHEN CO.STATE=17 THEN 'COMPLETED WITH TERMINATED'
WHEN CO.STATE=18 THEN 'RUNNING WITH FAULTS AND TERMINATED'
WHEN CO.STATE=19 THEN 'COMPLETED WITH FAULTS AND TERMINATED'
WHEN CO.STATE=20 THEN 'RUNNING WITH RECOVERY REQUIRED AND TERMINATED'
WHEN CO.STATE=21 THEN 'COMPLETED WITH RECOVERY REQUIRED AND TERMINATED'
WHEN CO.STATE=22 THEN 'RUNNING WITH FAULTS, RECOVERY REQUIRED AND TERMINATED'
WHEN CO.STATE=23 THEN 'COMPLETED WITH FAULTS, RECOVERY REQUIRED AND TERMINATED'
WHEN CO.STATE=24 THEN 'RUNNING WITH SUSPENDED AND TERMINATED'
WHEN CO.STATE=25 THEN 'COMPLETED WITH SUSPENDED AND TERMINATED'
WHEN CO.STATE=26 THEN 'RUNNING WITH FAULTED, SUSPENDED AND TERMINATED'
WHEN CO.STATE=27 THEN 'COMPLETED WITH FAULTED, SUSPENDED AND TERMINATED'
WHEN CO.STATE=28 THEN 'RUNNING WITH RECOVERY REQUIRED, SUSPENDED AND TERMINATED'
WHEN CO.STATE=29 THEN 'COMPLETED WITH RECOVERY REQUIRED, SUSPENDED AND TERMINATED'
WHEN CO.STATE=30 THEN 'RUNNING WITH FAULTED, RECOVERY REQUIRED, SUSPENDED AND TERMINATED'
WHEN CO.STATE=31 THEN 'COMPLETED WITH FAULTED, RECOVERY REQUIRED,SUSPENDED AND TERMINATED'
WHEN CO.STATE IN (32,64) THEN 'UNKNOWN'
ELSE CO.STATE || ' (UNKNOWN)'
END) AS COMPOSITE_INSTANCE_STATE FROM CUBE_INSTANCE CU, COMPOSITE_INSTANCE CO WHERE CU.ECID=CO.ECID
AND CU.ECID='${#TestCase#ECID}'
AND CU.CMPST_ID='${#TestCase#INSTANCEID}'
  • As a last step, verify that the state is CLOSED AND COMPLETED in the CUBE_INSTANCE table and COMPLETED in the COMPOSITE_INSTANCE table.
The test case is now complete, in the sense that it can successfully test all the major functionality of the business process. This increases operational as well as developer efficiency, since continuous changes to business processes can be supported without worrying too much about breaking existing logic. From here it is also possible to create functional test cases for all the other possible paths of the business process. Creating an additional test case is easy, as it mostly involves cloning an existing test case and making the necessary modifications to test data and steps. Once created, the test cases can be run together, sequentially or in parallel, as part of the original Expense Approval Functional Tests suite.
image
Conclusion
soapUI also provides a way to run these tests and extract reports through ant/maven, so it is extremely easy to incorporate them into the release cycle. If you use soapUI Pro, you have the out-of-the-box ability to generate a wide variety of reports for the executed tests. The basic version can publish JUnit-style reports, which I believe are good enough if you are running these tests in a continuous integration fashion; an indicative maven configuration is sketched below.
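As an indication of how simple the wiring is, the (then eviware) soapUI maven plugin can run a test suite during the build with a pom.xml snippet along these lines; the coordinates, version and paths shown are indicative, so check the soapUI documentation for your release:
<plugin>
   <groupId>eviware</groupId>
   <artifactId>maven-soapui-plugin</artifactId>
   <version>4.5.1</version>
   <configuration>
      <projectFile>${basedir}/src/test/soapui/ExpenseApproval-soapui-project.xml</projectFile>
      <testSuite>Expense Approval Functional Tests</testSuite>
      <outputFolder>${project.build.directory}/soapui-reports</outputFolder>
      <junitReport>true</junitReport>
   </configuration>
   <executions>
      <execution>
         <phase>integration-test</phase>
         <goals>
            <goal>test</goal>
         </goals>
      </execution>
   </executions>
</plugin>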
Business process testing covers how well the system executes the functions it is supposed to execute, including user actions, data manipulation and integrations. It is done to validate the solution as defined in the business requirements and detailed technical specifications for each process. It often contains the highest overall number of test cases, as it is focused on testing the entire system, not just small units of code. Business processes are tested to ensure that the edits, changes and outputs conform to the required specifications and/or expectations. It is also often the most complex area of testing, as it involves testing processes as they take shape during development and again as they evolve through multiple changes post deployment. Automating this testing will deliver substantial productivity and quality gains and make business processes far more adaptable, and this article can be used as a reference for carving out such a strategy. Of course there may be many ways to do it, but in my experience this approach is effective, easy and comes without any obligation to invest in expensive tools.
The composite project, setup scripts, soapUI projects and some sample reports used in this article can be found here. Please feel free to share opinions, feedback and suggestions and I will be glad to provide any help.

Industrialized SOA – Where are you? – Part 1

In my job I have had the privilege of working on some high profile SOA projects but more satisfyingly I have been able to see most of them through to success. This process can be rewarding but it also has its downsides and it is not (unfortunately) all Greenfields and jelly beans. In fact a lot of what I do is fire fighting and it is not easy rescuing projects on the verge of failure. You know the ones I am talking about, right? The toxic ones… the ones that no one wants to touch with a 10-foot pole.

My wife and I were recently fortunate to enjoy some amazing Indian cuisine cooked by my colleague Arun’s wife (some of you may know Arun from his popular blog on Oracle BPM, or as the author of Oracle SOA Suite 11g Administrator’s Handbook). This catch-up over great food and wine turned out to be an excellent opportunity to really talk about our passion for software delivery; perhaps much to the boredom of our respective partners.

One of the things Arun had mentioned was that he had recently been impressed by Mark Nelson’s musing about SOA Development & Delivery. As a long-time subscriber to Mark’s blog, I was keen to check it out. So after a sound night’s sleep, I was quick to Google ‘RedStack’ and here it was….
Short
Sweet
To-the-point
….And it hit home!
While Mark did not say this directly, it seems clear to me that the SOA brand is tainted (in at least some contexts) and people are asking questions…
  • Is SOA really agile when you need to wait months for a new complex feature to be released and 15 components are impacted by it?
  • Does BPM really empower the business when your server goes down and the root cause is not easily discoverable?
  • Is Cloud really a way to lower risk and increase productivity when the elasticity comes with lengthy periods of non-realised benefits, ramp up and confusion?
If this resonates with you, you are at the very least... not alone.
So what is going wrong? Why, in some cases, is the buzz seen as spin, as vendor marketing; little more than bent truths?
While I obviously don’t know all the answers, I’ll take a stab and say it has more to do with maturity levels and unwise decisions than with any failing of SOA and its promise.
As novelist Ellen Glasgow once said “All change is not growth, as all movement is not forward.” and with that I hope you’ll allow me to take this opportunity to wax lyrical on Mark Nelson’s gem by way of a few practical tips learnt in the trenches.
Version Control
Don’t just version control your source code. Version control EVERYTHING that will change! This includes:
  • OSB Configuration
  • SOA Projects
  • Customization Files
  • Composite Configuration Plans
  • WebLogic Deployment Plans
  • Build Scripts
  • Test Scripts
  • Deployment Scripts
  • Release Scripts
  • Start-up & Shutdown Scripts
  • “Health Check” Scripts
  • Application Server Configuration
  • Puppet Configuration
  • (Optionally) The Binaries. Note: this is unnecessary and redundant if you follow good binary management, which I’ll discuss in the next blog instalment.
  • And so on….
…But don’t let version control stop at source code and configuration assets; you should version control all of your documentation as well. For this, I would strongly suggest a wiki. However, there are plenty of expensive EDRMSs out there too, if you prefer to let your documents die a slow and horrible death.
Unit Testing
In the same vein as above, unit test everything! It is a myth that vendor tools don’t cut it when it comes to testing. You just need to know how! And sadly, they don’t always make it easy for you. Here are some suggestions:
  • Use a Web Service client such as soapUI or the OSB console to test for expected output of your synchronous services
  • Use instrumentation to test asynchronous functionality. If you don’t receive a response, don’t worry: let your component(s) log at the appropriate steps and let your test framework collate the times/payloads to determine test success or failure. Remember to make the logging pluggable so you don’t impact performance in production!
  • Don’t limit yourself to the happy path; test the unhappy ones as well.
  • Automate ALL unit tests and let Maven kick them off in its lifecycle. Even tests in soapUI and the OSB console can be automated; you just need to know how.
  • Google is your friend. Are you frustrated with Oracle Business Rules (OBR) because they can’t be easily tested? Then check out Arun’s blog about automating business rules testing (http://beatechnologies.wordpress.com/2012/03/09/automating-business-rules-testing-in-oracle-soa-suite-11g/), as the approach could be adapted to test OBRs through Maven. Alternatively, there is a free tester GUI available at https://code.google.com/p/oracle-business-rules-tester/. Case in point: even if Google the corporation is not your friend, Internet search most definitely IS your friend.
Acceptance Testing
“We can’t migrate from 11g patch set 4 to the latest patch set 7. It would be too expensive.”
If you have heard a similar statement before then you, my friend, may (unfortunately) be involved in a project with poor quality control. As a big fan of test automation who has worked on many projects with little to no test coverage, it saddens me to still meet people absolutely terrified of change.
Did you know that acceptance tests, integration tests, unit tests can all be automated? Yep.
“But Craig, I know they can be automated, but it is too technical and expensive.”
As a former believer in such statements, I have to say I am converted. There are a number of very mature languages out there which allow acceptance tests to be written in a business-readable language. Here is one such example:

[code]Scenario: Replaced items should be returned to stock
  Given that a customer buys a blue garment
  And I have two blue garments in stock
  And three black garments in stock
  When he returns the garment for a replacement in black
  Then I should have three blue garments in stock
  And two black garments in stock[/code]

Can you read the above and understand what needs to be tested? If you can, then you may be surprised to learn that this scenario is actually executable and can be used to check whether the product delivers as expected… automatically. If the product doesn’t meet the criteria it will fail the acceptance test, and the whole team will know about it. Yep. This isn’t magic. It is a language called Gherkin, and through tools such as Cucumber you can automatically run acceptance and regression tests against your application. What a wonderful world we live in.
Automation as a Project
One of my favourite items in Mark’s blog was the statement that “people often try to automate the build process on a project by project basis.”
This is one of the biggest no-nos in my book. When you’re providing all this agility through reusable components, why on earth would you neglect that same discipline in your automation?
Always keep your configuration separate from your implementation via a DSL or properties file, as sketched below. The moment you hardcode a password or a URL in a shell script that kicks off WLST you are basically saying “Hands off my script!”
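To make that concrete, here is a minimal, illustrative WLST (Jython) sketch; the file names and property keys are made up, but the point is that nothing environment-specific lives in the script itself:

[code]# deploy.py -- run as: wlst.sh deploy.py dev.properties
import sys
from java.io import FileInputStream
from java.util import Properties

# Load environment-specific settings; pass a different file per environment
props = Properties()
props.load(FileInputStream(sys.argv[1]))

# The script below is now identical for dev, test and production
connect(props.getProperty('admin.user'),
        props.getProperty('admin.password'),
        props.getProperty('admin.url'))
# ... deployment and configuration steps go here ...
disconnect()[/code]

Better still, source the password from a secured credential store rather than a plain file.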
In short,
[code]Sharing = Caring
Shell Script + Ant + WLST != "A Good Automation Solution"[/code]
As Luke Kanies, founder of Puppet, put it: “ssh in a for loop is not a solution”.
What’s Next
In future instalments I will reveal more practical tips on:
  • Continuous Delivery
  • Configuration Management
  • Binary Management
  • Dependency Management
  • Virtualization
  • Data Management
  • DevOps
If you liked this post, please subscribe or share it with your network. We love feedback so please don’t hesitate to hit the comments up below!