B2B integration vs. SOA, Cloud, Social, Mobile and Big Data

Remember B2B integration using EDI? And later using RosettaNet, OAGIS, HL7 and a whole lot of other standards that get large global supply chains to work? I’ve worked a lot in this specific area and keenly follow the B2B integration landscape to see what the latest trends and approaches are. “SOA” during the last decade and “cloud” now have completely stolen the thunder (if there ever was any) from industry-standard-based B2B integration. Search for an article on EDI or RosettaNet on google.com and check the published dates of the top results. It is obvious that B2B integration is not sexy any more.

But what does standards-based B2B integration really solve, and do SOA, cloud or any of the other emerging technologies offer a better solution to the problem space? Are these new technologies completely unrelated disciplines which unfortunately encroach upon a limited skill-set and IT budget? Or do they in some way make standards-based integration obsolete, a thing of the past? Or is standards-based B2B integration something that has its own secure space and will gradually evolve at its own leisurely pace, unperturbed by all the SOA and cloud madness around it? Or is it a solved problem? Here are my thoughts on some of these questions.

Firstly, B2B integration itself is a vague term. What does it try to solve? At the most abstract level it simply enables two businesses to establish a synergy and achieve a “whole” greater than the sum of their respective individual “parts”. In a supply chain, it typically consists of various links collaborating and sharing information effectively to achieve greater efficiencies and profits. IT is simply an enabler; the driver is the realization that businesses at various stages of the supply chain can cooperate and gain more by striving for greater global efficiencies rather than optimizing individual profits in silos.

A key requirement of successful B2B integration is the seamless flow of information between the partnering businesses. This is where the IT aspect kicks in. IT can enable successful B2B in several ways. Shared databases, portal integration, message-based integration, or good ol’ pen, paper and fax machine plus document management systems: any or all of these can help two businesses that want to collaborate exchange the required information in an efficient and timely manner.

Of all of these, message-based B2B integration is of the most interest to me. The idea is simple: two businesses willing to exchange information do so using pre-defined message formats. The challenge, however, lies in defining these message formats. A supplier sells to multiple retailers or distributors. A retailer receives supplies from multiple suppliers and distributors. If every supplier sends information in a custom message format, the retailer’s IT systems will need to handle hundreds or thousands of message formats and transform them to a common format which the retailer’s internal IT processes can receive. Translation of messages from one format to another is a manual, labor-intensive and thus expensive process. Therefore you need some predefined, standard message formats and associated semantics that all suppliers and retailers (or any two links in a supply chain) can agree upon, so that the need for transformation can be eliminated. This was the promise of industry-standard messaging formats such as X12, EDIFACT, RosettaNet, OAGIS, etc. They defined libraries of message definitions and their usage scenarios, and as long as two businesses followed a common standard there was no need for any transformation of the messages exchanged between them.

So was it all hunky-dory? Not exactly. Defining standards is not an easy job at all. Firstly, competing businesses do not like to agree upon anything. Even the cost benefit of a standard messaging library is sometimes not enough for businesses to make the concessions required for common ground to be reached. Also, on a more abstract level, the nature of the information that two parties need to exchange to complete a business transaction depends upon the context within which the transaction instance is created. No two such transaction contexts are 100% identical. Hence the information required by A & B to integrate can be different from the information required by C & D. A common messaging library that addresses the integration needs of both A-B and C-D is going to be a super-set of the messages and message definitions used in both contexts. Extrapolate this over an entire industry and the standard becomes bloated. When a standard message library, and each individual message definition within it, becomes bloated, every integration scenario ends up using only a very small subset of the standard. Each subset becomes unique again. Transformation is again needed to convert the various subsets of the standard into the single format that a given business’s IT systems can process. The cost-benefit of standards-based B2B starts to lose its sheen.

In spite of the above problem, having standard message libraries makes B2B integration a lot easier than not having them. Transforming one subset of the standard into another is a comparatively easier task than transforming messages encoded in random custom formats. Eventually every closely related group of businesses agreed upon a common subset of the standard (often called a variant in the EDI world). EDI-based B2B integration became the norm. It came with high CAPEX but relatively low OPEX. It took significant cost and time to build standards-based B2B messaging interfaces for your business. However, once the IT systems in your business were capable of sending and receiving messages in a certain standard (or variant/subset of the standard), you could pretty much do B2B integration with any business which supported that standard at relatively low additional cost.

Several standards emerged, each targeted at specific industries and verticals (integration contexts), and businesses invested heavily in enabling their systems for messaging using the standards they were interested in.

However, a lot of companies could not afford the costs required to standards-enable their IT systems, and an alternative approach to B2B integration started gaining traction. B2B integration hubs emerged which supported any-to-any integration. These hubs typically operated on a per-transaction-cost model. They allowed businesses to send and receive messages to and from multiple trading partners using their own message formats, while the hub took care of transforming the messages into the format required by the recipient. The transformation was provided as a service. Each hub supported multiple such sender-receiver pairs and could reduce the cost of integration for any individual business (spoke system) as a result of economies of scale. The hubs had the expertise required for transformation and supported out-of-the-box integration with the interfaces of the various ERP packages that made up the IT landscape of the businesses participating in the integration. This was B2B integration as a service: B2B in a SaaS model, way before “SaaS” gained popularity in the post-cloud era.

The shared database approach also gained some traction. Instead of each business exchanging messages with every other business, a global virtual database could be updated with shared information, and subscribers could fetch the relevant information from this shared source. This option works only when there is common information that is of use to multiple subscribers (e.g. product, symbol or catalog information), but not for information that is used by just one or two parties (e.g. orders, invoices). The GS1-driven global product data hub is a leading example of how a shared database can be used to solve certain B2B integration requirements. You still required all the businesses which needed to update or read this shared database to support the common API it exposed, so standards-based messages were still needed.

Thus, B2B integration based on standard message libraries plus messaging hubs powers global supply chains today.

Now, coming back to the pith of my rant: do any of the emerging technologies challenge the message-based B2B integration described above?

Let’s get the easy ones off the list first.

Social does not really touch this problem space at all. Social has a lot of significance in the B2C space, with social platforms providing a new medium for businesses and customers to interact with each other. However, two businesses needing to exchange information to minimize inventory will not use social media to do so. Not the way things are today.

Big Data – again, this really has nothing to do with the B2B integration problem. Big Data deals with the challenges that arise while processing the large data sets produced by internet-scale applications and automated devices. It does not deal with specific, near-real-time information exchange between partnering business entities.

Mobile – again, similar to Social, this is another channel of communication between businesses and their consumers. Mobile platforms will not be used by business partners to exchange structured information. It is possible that certain tightly integrated business partners will use mobile and the IoT (Internet of Things) to get real-time visibility into each other’s business processes, but this will be true only for a very small and insignificant fraction of the business partners who are using message-based B2B integration, at least in the next few years.

SOA & Web Services – The general impression is that web-services-based SOA solves all integration problems. Let us put it in the context of the B2B integration problem. Assume every business has its SOA WS-* interfaces. For my business to integrate with all my partners, I will need to be able to support the WS-* interfaces of all my partners. If each partner has its own unique WS-* interface, I will need to transform requests and responses from my IT systems to the formats defined by the WS-* interfaces of each of my partner businesses. My business will need to develop and maintain hundreds and thousands of transformations. So the initial problem that was solved by B2B messaging remains unaddressed. However, with a strong SOA backbone in place you can unlock more enterprise functionality for all kinds of uses, including information sharing with your trading partners. So SOA does help B2B in that way.

Finally, Cloud – At some level one might feel that the cloud eliminates the need for B2B integration. If every business’ IT system is in the cloud, then theoretically there is no need for businesses to speak to each other directly; all communication happens in the clouds. Unfortunately there is not one cloud. We have public, private and hybrid clouds. We have SaaS, PaaS and IaaS too. If you are doing PaaS or IaaS, then your business IT still has its own applications to manage and is thus responsible for integration with other businesses. You still need messaging standards to do that integration. The integration may be hosted on the cloud, but it still needs to be built and maintained. No pennies gained in solving the B2B integration problem.

However, if your business uses SaaS for its IT, then the SaaS provider can offer pre-built integration support for a lot of popular B2B messaging standards. So you can just sign up for a transaction/usage-based payment model and have your IT enabled for B2B integration. The pre-condition here is that your SaaS provider supports the B2B message standards that your partners require for integration. (It really does not matter whether your partners are running their IT on-premise or on the cloud.)

In case your SaaS provider does not support the required B2B messaging standards (and this is most likely the case), then your SaaS provider will need to partner with a B2B messaging hub which provides standards-based message transformation as a service. Your SaaS instance can then be configured to exchange information with the B2B messaging hub, which in turn can do the heavy lifting of integrating with your business partners using standards-based B2B messaging.

So yes, SaaS plus B2B messaging hubs do offer a disruptive alternative to standards-based B2B integration in its current form. However, until a consolidation of SaaS vendors and B2B hubs happens, and while the majority of businesses continue to rely on on-premise IT, standards-based messaging such as EDI and the various XML business messaging standards will continue to drive B2B integration.


Eucalyptus – web service based implementation of cloud infrastructure

Came across Eucalyptus (Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems), cloud infrastructure software that allows you to build and manage IaaS (Infrastructure as a Service) clouds for the enterprise.

Eucalyptus itself exposes its functionality (i.e. cloud management and control) via web-service interfaces, thus allowing the applications and users of the cloud to seamlessly manage the cloud infrastructure based on varying business needs.

The product is open source and allows you to build Linux-image-based clouds on top of common virtualization options.

Also check out this presentation on IaaS by Eucalyptus. It is a very useful insight from the founders of Eucalyptus into the current state of cloud computing, with emphasis on private clouds, IaaS and how Eucalyptus addresses these needs.


Open source SOA registry – Membrane

I came across an open source SOA Registry named “Membrane” http://www.membrane-soa.org/soa-registry/

Membrane allows you to:

  1. Register services by providing the hosted WSDL
  2. Define dependencies among services
  3. Check services for WS-I compliance
  4. Periodically check for the availability of a service and raise events on availability changes
  5. Check for changes in WSDLs or the underlying XSDs and raise events when changes are detected
  6. Maintain service availability statistics
  7. Store a copy of service metadata such as the WSDL and XSDs
  8. Use a SOAP client which allows XML and form-based input to a web service
  9. Rate and tag web services

Some of the really cool features that I liked were:

  1. WSDL comparison
  2. XML Schema comparison
  3. Automated checks and events for WSDL – XSD changes
  4. Tagging of services
  5. RSS feed for events (new/changed/available/unavailable services)

These features give a rich-text output of the differences between any two services/versions and can thus be used to evaluate the impact of changes to a service definition.

A few key registry features that seemed to be missing:

  1. No UDDI support (no service-key based lookup, etc.)
  2. No ability to search for web services by name/description etc. (I could not find it – perhaps there is a way to do it)
  3. No ability to categorize services (some may argue that tagging serves the same purpose and may in fact be a better approach!)

There is a publicly hosted instance of the SOA registry at http://www.service-repository.com/

Events detected by Membrane (screenshot)

So for project teams and organizations which want to get started on some kind of SOA governance and are looking for a basic service/SOA registry, this may be an option to get started with (instead of going the wiki/spreadsheet way). It would be especially useful in large development projects which need an internal SOA registry to keep track of the services being developed and to help various internal teams track changes to service definitions.

Since the tool itself is open source, it can perhaps be customized/extended to allow association of additional metadata with the services as required.

There are other products, such as the Membrane ESB, which can be combined with the Membrane SOA registry to achieve service virtualization: http://www.membrane-soa.org/esb/

New year wishes!

Wish you all a very happy new year 2012.

Wikipedia has some interesting notes about the year. http://en.wikipedia.org/wiki/2012

Apart from the possibility and belief that the world as we know it may come to an end (the 2012 phenomenon), it also happens to be the year dedicated to Alan Turing, the father of computer science, and has been dedicated by the Indian Government to Srinivasa Ramanujan, the eminent Indian mathematician.

So whether you are a computer scientist or a believer in doomsday prophecies, there is a lot to look forward to in the year ahead :)

Enjoy!


Use JAX-WS message handlers to address cross-cutting concerns: Caching

SOA adoption is often accompanied by the recognition of the need for a revamped approach to handling cross-cutting concerns such as logging, diagnostics and monitoring, security, error detection and reporting, performance, etc. All the shiny new services being developed on the latest middleware stack either implement these revamped standards for cross-cutting concerns or rely on a policy-based service bus to implement some of them. It is generally considered a best practice that the business method/logic should be separated from the code and configuration that handles common cross-cutting concerns.

For Java web services developed by exposing business functionality written in Java beans via JAX-WS, the SOAP message handlers are a useful place to plug in a lot of the common processing associated with service invocations.

This JavaWorld article http://www.javaworld.com/javaworld/jw-02-2007/jw-02-handler.html has a very good introduction to this feature.

In this post, I will provide some sample skeleton code which shows how SOAP message handlers can be used to cache service invocation responses on the server side and return cached results, bypassing the endpoint invocation when a cached result is available.

Assume we have a Customer getCustomerDetails(Customer customer) method which accepts a customer object with a customerId or name as input and returns a Customer object with the details of the customer fetched from a database. Assume we also have a cache named CustomerCache with static methods isAvailable(String), Customer get(String) and add(String, Customer) which provide faster access to cached customer data. We now want to add caching logic on the server side to first look for the requested customer details in the cache and return them to the client, invoking the service endpoint only if the customer is not in the cache. Also, whenever the service endpoint is invoked (customer details not found in the cache), the object returned by the endpoint has to be added to the cache for future requests.

To achieve this we implement a logical message handler (javax.xml.ws.handler.LogicalHandler); a consolidated sketch of the handler appears after the step-by-step description below.


We implement the handleMessage method of the logical handler to delegate the request and response processing to utility methods.


In the handleRequest(LogicalMessageContext) method, we check if the requested service result is present in a custom cache. If it is, we set a message property to the cache key and return false to skip any further processing of the request. As a result, the actual service endpoint is never invoked.


In the handleResponseMessage(LogicalMessageContext) method, we check if the message property with the cache key was set by the request handler. If it was, we know that the service endpoint has not been invoked, so we look up the object from the cache and modify the response payload to include it. If the message property was not set by the request handler, we read the object from the response message (now coming from the endpoint) and add it to the cache, so that future requests for this object can be served from the cache.

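Since the code screenshots from the original post are not reproduced here, below is a minimal, consolidated sketch of such a handler. The GetCustomerDetails/GetCustomerDetailsResponse wrapper classes, the cache-key property name and the map-backed CustomerCache are illustrative assumptions rather than the original implementation, and depending on how the service artifacts are generated the logical payload may arrive wrapped in a JAXBElement.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.xml.bind.JAXBContext;
import javax.xml.ws.LogicalMessage;
import javax.xml.ws.handler.LogicalHandler;
import javax.xml.ws.handler.LogicalMessageContext;
import javax.xml.ws.handler.MessageContext;

// Simple map-backed stand-in for the CustomerCache assumed in the post.
class CustomerCache {
    private static final Map<String, Customer> CACHE = new ConcurrentHashMap<String, Customer>();
    static boolean isAvailable(String key) { return CACHE.containsKey(key); }
    static Customer get(String key) { return CACHE.get(key); }
    static void add(String key, Customer customer) { CACHE.put(key, customer); }
}

public class CustomerCachingHandler implements LogicalHandler<LogicalMessageContext> {

    // Context property used to signal "serve from cache" between the inbound
    // (request) and outbound (response) passes. The name is an assumption.
    private static final String CACHE_KEY_PROPERTY = "customer.cache.key";

    private final JAXBContext jaxbContext;

    public CustomerCachingHandler() {
        try {
            // GetCustomerDetails / GetCustomerDetailsResponse stand for the JAXB
            // request/response wrapper classes generated for getCustomerDetails.
            jaxbContext = JAXBContext.newInstance(GetCustomerDetails.class,
                                                  GetCustomerDetailsResponse.class);
        } catch (Exception e) {
            throw new IllegalStateException("Unable to create JAXBContext", e);
        }
    }

    public boolean handleMessage(LogicalMessageContext context) {
        boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        // On the server side, inbound = request, outbound = response.
        return outbound ? handleResponseMessage(context) : handleRequest(context);
    }

    private boolean handleRequest(LogicalMessageContext context) {
        try {
            LogicalMessage message = context.getMessage();
            GetCustomerDetails request = (GetCustomerDetails) message.getPayload(jaxbContext);
            String key = request.getCustomer().getCustomerId();
            if (CustomerCache.isAvailable(key)) {
                // Remember the key and skip the endpoint: returning false stops
                // request processing and reverses the message direction.
                context.put(CACHE_KEY_PROPERTY, key);
                return false;
            }
        } catch (Exception e) {
            // On any failure, fall through to the real endpoint.
        }
        return true;
    }

    private boolean handleResponseMessage(LogicalMessageContext context) {
        try {
            LogicalMessage message = context.getMessage();
            String key = (String) context.get(CACHE_KEY_PROPERTY);
            if (key != null) {
                // The endpoint was skipped: build the response from the cache.
                GetCustomerDetailsResponse response = new GetCustomerDetailsResponse();
                response.setReturn(CustomerCache.get(key));
                message.setPayload(response, jaxbContext);
            } else {
                // The endpoint was invoked: cache its result for future requests.
                GetCustomerDetailsResponse response =
                        (GetCustomerDetailsResponse) message.getPayload(jaxbContext);
                Customer customer = response.getReturn();
                CustomerCache.add(customer.getCustomerId(), customer);
            }
        } catch (Exception e) {
            // Leave the response untouched if anything goes wrong.
        }
        return true;
    }

    public boolean handleFault(LogicalMessageContext context) { return true; }

    public void close(MessageContext context) { }
}

The handler can then be attached to the service endpoint, for example through a @HandlerChain configuration file referencing this class.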

Thus, using message handlers, additional message-processing logic can be added to a service while the actual service implementation deals only with the business logic.


Semantic web-service descriptions using OWL-S

In this post http://buddhiraju.wordpress.com/2011/12/03/xsdannotations/ I described some of the problems with WSDL as a language for describing web services. The key problem is that WSDL is a specification focused on describing a web-service interface rather than the semantics of the web service. A WSDL typically describes a set of operations, their input and output message structures, as well as the messaging protocol(s) and address(es)/endpoint(s) to be used for accessing the service. However, WSDL itself does not have a formal way of describing the “meaning” of these web-service operations.

The most common ways to associate a meaning with a WSDL are:

  • Annotations / descriptions on the WSDL
  • Well-defined naming conventions for services, messages, operations and their namespaces
  • SOA governance tools which enrich WSDL interfaces with additional categorization and tagging
  • Product and service portfolio documentation links

The above solutions work fairly well for a user who wants to look up a portfolio of web services, read their descriptions and choose the service that best matches his requirement. For example, a user can navigate through the “Travel > Car Rental > USA” categories in a web-service catalog to find multiple car-rental web services. The user can easily guess what functionality a “CarRentalService” from “http://www.avis.com” is likely to provide. The user can drill down into the service operations and identify the purpose of the “createBooking” and “cancelBooking” operations. The associated XSDs and embedded annotations will describe the inputs and outputs in detail, and sample code will help the user get started with developing client code for the WS.

However, as the semantic web gains momentum, there is a need for user-agents/bots to perform intelligent actions on the Web on behalf of the user. These software agents should be able to discover and invoke web services as easily as users can. But discovering a service by navigating through an arbitrary category hierarchy, or guessing the service functionality from its name and confirming its behavior by looking up supporting unstructured documentation, are tasks that are as difficult for a bot as they are easy for a human.

The semantic web advocates that content generated and added to the Web be annotated with additional machine-friendly metadata which associates a meaning with the generated content. RDF and OWL, along with XML, are the building blocks of the semantic web. http://en.wikipedia.org/wiki/Semantic_Web

OWL-S is an ontology (read: a formal description) of a web service built using the Web Ontology Language (OWL). OWL-S can be used to describe what a service offers and how the service is used, as well as the details of interacting with the service, using the ServiceProfile, ServiceModel and ServiceGrounding properties of the OWL-S Service class. The ServiceGrounding, which specifically deals with the service interface/interaction mechanism, can in turn make use of a WSDL for the same: http://www.w3.org/Submission/OWL-S/

Thus a service provider can create OWL-S descriptions for web-services along with WSDLs for capturing the concrete service interface details. These can potentially be used by user-agents and bots to dynamically discover, bind and invoke web-services without any user intervention.

Free and open-source tools for creating OWL-S: http://www.ai.sri.com/daml/services/owl-s/tools.html

Currently OWL-S is still a submitted W3C proposal. There seems to be no real adoption of this standard in the industry by any major commercial vendors or open-source frameworks; most interest in the standard is in the academic world. There is also the Web Service Modelling Ontology (WSMO), an alternative semantic model for describing web services: http://www.wsmo.org/2004/d4/d4.1/v0.1/20050106/d4.1v0.1_20050106.pdf

It could be several years before any of these efforts result in some kind of standard. However, as a developer of web services, it is extremely important to be aware of the need to think beyond the WSDL and make it easier for your web services to be discovered and used by humans and machines alike.


Think REST – GET can be a very powerful verb

The basic idea of REST is to think of your system as a collection of resources (and their representations) which transition from one state to another when acted upon by messages.

REST is an architectural style which applies constraints such as separation of concerns between client and server, support for caching, statelessness, support for multi-hop/redirection of requests, etc. The key driver for these constraints is to ensure the scalability of the system.

The internet of hypertext documents is a typical RESTful system, with multiple components interacting with each other using HTTP messages which result in the transfer, exchange and modification of hypermedia. (Not all web applications are RESTful, since they may not adhere to one or more of the architectural constraints; e.g. web apps which use cookies are not stateless.)

HTTP offers a set of verbs (GET, PUT, DELETE, POST) with universal semantics which can be used to perform operations on resource-representations (URIs) present on the WWW.

A common interpretation of REST when building web services is to make the entities being exposed by the web services (e.g. Orders, Customers, AddressBooks) identifiable using URIs and to map GET, PUT, DELETE and POST to the common CRUD operations (e.g. query, update, remove, create/replace) on these resources. This works beautifully when you expose data services as RESTful APIs.

However, this interpretation is non-intuitive when you apply REST to a non-CRUD WS operation, e.g. calculateTax. In this case calculateTax is not a resource; it is a function which acts on an input and produces an output. Whose representation is getting transferred here? What is its URI?

One way to look at it is to consider a resource whose state is the calculated tax for the given input. The calculateTax service produces this resource and returns it to the user. The resource is transient and is not persisted after it is transferred to the client.

Thus the service / API below

  • TaxCalculation calculateTax(IncomeDetails incomeDetails)

can actually be modeled as a GET request for a TaxCalculation resource specific to the input IncomeDetails:

GET http://taxcalculation?income=5000&state=CA&userID=x123
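As a hedged illustration (the resource path, class names and the flat 9% rule below are assumptions, not anything from the original post), a JAX-RS sketch of this idea could look like the following; the TaxCalculation representation is computed on demand and never persisted:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical JAX-RS resource: the tax calculation is exposed as a GET on a
// transient TaxCalculation resource identified purely by its query parameters.
@Path("/taxCalculation")
public class TaxCalculationResource {

    @GET
    @Produces(MediaType.APPLICATION_XML)
    public TaxCalculation calculateTax(@QueryParam("income") double income,
                                       @QueryParam("state") String state,
                                       @QueryParam("userID") String userId) {
        // userId could drive user-specific rules; it is ignored in this sketch.
        TaxCalculation result = new TaxCalculation();
        result.setIncome(income);
        result.setState(state);
        result.setTax(income * 0.09); // placeholder tax rule, not a real calculation
        return result;
    }
}

// Minimal representation of the transient resource that gets transferred.
@XmlRootElement
class TaxCalculation {
    private double income;
    private double tax;
    private String state;

    public double getIncome() { return income; }
    public void setIncome(double income) { this.income = income; }
    public double getTax() { return tax; }
    public void setTax(double tax) { this.tax = tax; }
    public String getState() { return state; }
    public void setState(String state) { this.state = state; }
}

Because the operation is idempotent and free of side effects, clients and intermediaries can cache the GET response just like any other resource representation, which is exactly what the REST constraints encourage.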

Can GET always be used for modeling any idempotent function which does not have any side-effects on the system?


Criticism of AMQP

One of my previous blog entries was about AMQP, a new wire-level messaging protocol: http://buddhiraju.wordpress.com/2011/11/26/amqp-advanced-message-queueing-protocol/

Today I came across a very thorough criticism of the protocol by one of its authors: http://www.imatix.com/articles:whats-wrong-with-amqp

It is a very long article, but the key points made against the AMQP protocol are:

1. It is too complex.

2. Much of that complexity comes from unnecessary reliability requirements.

3. It is architecturally flawed, since it requires a central messaging server.

4. A 100% binary protocol was a bad design choice.

5. There are issues with the functioning of the committee itself.

6. There is too much emphasis on making it JMS-like or JMS-compatible.

The article does a very deep architectural evaluation of the protocol and suggests corrections to fix it. The suggested alternative is to move away from the central-server assumption to a more distributed P2P model. While I do not have enough experience to comment on which of the approaches is better, the article makes for really interesting reading.


Examples of xsd:appinfo

Continuing from my previous post http://buddhiraju.wordpress.com/2011/12/03/xsdannotations/, here are a few real-world use cases where xsd:appinfo is put to good use.

For example, using xsd:appinfo you can override the default XML Schema to Java conversion logic applied by JAXB.

<xsd:simpleType name="ZipCodeType">
  <xsd:annotation>
    <xsd:appinfo>
      <jxb:javaType name="int"
        parseMethod="primer.MyDatatypeConverter.parseIntegerToInt"
        printMethod="primer.MyDatatypeConverter.printIntToInteger" />
    </xsd:appinfo>
  </xsd:annotation>
  <xsd:restriction base="xsd:integer">
    <xsd:minInclusive value="10000"/>
    <xsd:maxInclusive value="99999"/>
  </xsd:restriction>
</xsd:simpleType>
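The practical effect of this customization is that JAXB binds ZipCodeType to a plain int instead of the default java.math.BigInteger mapping for xsd:integer, invoking the custom converter methods while unmarshalling and marshalling. A rough sketch of the resulting property (the USAddress class and zip field are illustrative assumptions, not actual xjc output):

public class USAddress {

    // Bound as int because of the jxb:javaType customization; without it the
    // xsd:integer-based ZipCodeType would map to java.math.BigInteger.
    protected int zip;

    public int getZip() { return zip; }
    public void setZip(int value) { this.zip = value; }

    // During unmarshalling JAXB calls primer.MyDatatypeConverter.parseIntegerToInt
    // to produce the int value, and printIntToInteger when marshalling it back.
}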

JAXB in fact uses XML Schema appinfo elements extensively to control Java-XML mappings. More info here: http://docs.oracle.com/javaee/5/tutorial/doc/bnbbf.html

Microsoft SQL Server uses appinfo at the document level to convey mapping information between the XML schema and the corresponding database schema: http://msdn.microsoft.com/en-us/library/aa258678(v=sql.80).aspx

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
  <xsd:annotation>
    <xsd:appinfo>
      <sql:relationship name="OrderOD"
                        parent="Orders"
                        parent-key="OrderID"
                        child="[Order Details]"
                        child-key="OrderID" />
    </xsd:appinfo>
  </xsd:annotation>