Wednesday, December 23, 2009

Looking back on 2009 - What development in SOA am I most thankful for?

Hmmm... and the winner is the proclamation that "SOA is dead!" by Anne Thomas Manes, a Research Director with the Burton Group, in her blog entry "SOA is Dead; Long Live Services". By sparking much needed debate, soul-searching, and introspection, that proclamation has actually resulted in a revived interest in and focus on SOA.

* Originally posted in the ebizQ SOA Forum on December 23, 2009

Monday, November 23, 2009

Hot off the Press - Avoiding the Storms: Why We Need Cloud Governance


Yes, today's forecast is scattered clouds. Scattered clouds imply a nice day with mostly sunny skies and a few scattered showers. But don't let today's sunny skies lull you into a complacent afternoon siesta. An unfettered increase in "scattered clouds" could mean that today's sunny skies are just the calm before the raging storms arrive.

Without proper planning and oversight (i.e. governance), cloud computing will inevitably repeat the SOA story. My latest feature, Avoiding the Storms: Why We Need Cloud Governance, explores this train of thought.

Thursday, November 19, 2009

Is SOA Everywhere?

The title of this post reflects a very interesting question that I came across today on the ebizQ SOA forum where I am a contributing analyst.

Joe McKendrick explains his question as "while some viewed SOA as failed or dead, a different reality may have taken root. That is, everything about IT - developing, integrating, embedding, and modeling - is now done in a service-oriented way, or with service orientation as the goal. Have SOA principles become so ubiquitous they have simply melded into the background?"

As I pondered the answer, I was reminded of the story of a very talented painter named Zeuxis, who could paint amazingly life-like pictures. Once he painted a picture of a boy carrying a basket of ripe red cherries. When he hung this painting outside his door, some birds flew down and tried to carry the cherries away. "Ah! this picture is a failure," he said. "For if the boy had been as well painted as the cherries, the birds would have been afraid to come near him."

The moral of the story is that if an SOA is done correctly, it should meld into the background rather than stick out like a sore thumb. In other words, good SOAs are hard to find not because they don't exist but because they have become part of the fabric of the enterprise.

* Originally posted on the ebizQ SOA Forum on 11/19/2009

Thursday, November 5, 2009

Does cloud computing need malpractice safeguards?

An interesting question raised by James Urquhart's blog on CNET. Urquhart argues that either there must be some kind of government regulation of minimum standards for cloud provision, or customers should be able to bring forward "cloud malpractice" suits.

I respond with a simple question of my own:

"Can we start by enforcing the myriad of regulations we already have before we start thinking of new ones?"

The fact is that many of the existing regulations already apply to clouds. For example, information security on government clouds is still subject to the Federal Information Security Management Act (FISMA), although a few enhancements are being discussed to make the act more amenable (i.e. less bureaucratic) to clouds. Similarly, all of the data privacy (national and transborder) laws still apply to a cloud environment. Thus, IMHO, most of the legal issues surrounding clouds may already be addressed within the context of the existing legal framework, reinforced by contractually enforceable SLAs. An evolving body of "case law" may also help address some of the "grey" issues that arise as cloud use becomes more pervasive.

At the same time, I don't think we can completely rule out the possibility of a few new and very cloud-specific regulations, especially if they help alleviate public concern and increase speed of adoption.

Monday, October 26, 2009

Hot off the Press - Eight Myths of Cloud Computing


As Taylor Rickard, chief technology officer of G&B Solutions, so eloquently puts it, “Ask 25 people what cloud computing means and you are likely to get 30 different definitions.” With so much misinformation out there, is it any wonder that there are so many myths associated with clouds? My latest article dispels eight of the most common myths.

Read the complete article here

Friday, October 23, 2009

Hot off the Press - Open Government - Five Key IT Issues


We have barely scratched the surface regarding social media use in the pursuit of an Open Government. The root problem is an "impedance mismatch" between the federal operating environment and the technology -- namely, a federal environment that is still very 20th century and a technology that is very 21st century.

Interested?

Read my latest article Open Government - Five Key IT Issues.

Tuesday, October 13, 2009

Hot off the Press - The Cloud SOA Ecosystem


The union of SOA and the cloud goes beyond a simple convergence – it actually represents an ecosystem. Read my feature article on ebizQ titled The Cloud SOA Ecosystem to find out why.

Thursday, October 8, 2009

Can Cloud Defend Against DDoS Attacks?

I just came across an interesting blog entry titled Can Cloud Defend Against DDoS Attacks? on Govinfo Security, an educational portal catering to security professionals in the Federal Government space.

The blog entry makes an interesting observation, claiming that:

"...cloud computing services, such as Google's App Engine and Amazon's Elastic Compute Cloud, or EC2, provide flexible hosting resources that can grow to accommodate a surge in demand. Imagine if the agencies that were affected by the [DDoS] attacks had been sitting in the cloud when the malicious traffic started rolling in. The ability to disrupt agency websites becomes a function of how much capacity Google and Amazon have to support the requests. These providers likely have plenty of bandwidth to sustain the attack and provide service with little to no service disruption.

Here's my problem:

Claiming that "cloud computing services, such as Google's App Engine and Amazon's Elastic Compute Cloud, or EC2, have plenty of bandwidth to sustain a DDoS attack" is akin to arguing that "you can tolerate the cold winter better by becoming fatter."

Is the fact that we have more scalability even relevant in a discussion about security?

Friday, October 2, 2009

What are Enterprise IT Geeks Obsessed With Today?

I've been swamped at work responding to an RFP in which I am writing about security, C&A, CMMI, ISO, and a host of other things. I needed a break, and just then a new question popped up on the ebizQ forum:

"What are Enterprise IT Geeks Obsessed With Today?"

LOL... Now how could I possibly answer this question? :)

However, if in some parallel universe, I were an Enterprise IT Geek then I would be obsessed with:

A. Justifying all of the acronyms we have today,
B. Coming up with new and improved reasons as to why all the above are still not enough to create an "enterprise" solution on time and on budget, and
C. A program that generates new, sensible sounding acronyms that I would say are essential to getting what I stated was missing in B (above)

The whole process would be iterative, both end to end and between stages, and its implementation would beg, borrow, and steal from the best-of-breed Agile processes (XP, Scrum, etc.).

But then, as I stated before, I'm not a Geek, so what would I know? :)

Enough said... now it's time to get back to work!

Thursday, September 24, 2009

My SOA Elevator Speech

A recent question on the ebizQ SOA forum involved this scenario:

"You're the CIO of a Fortune 500 company and you step into an elevator with your CEO. He asks why we should approve your seven figure SOA budget request. So what's your "elevator pitch" for SOA? Make it short and to the point – the elevator is already rising fast."

So, what would my answer be?

"SOA is the centerpiece of our IT strategy in direct alignment with the board of director mandated enterprise-level risk management initiative that ensures IT continuity, resilience, compliance with regulatory requirements such as SOX, asset protection, and minimization of negative financial exposure."

BTW, I timed my response to 25 seconds :).

* Originally posted on the ebizQ SOA Forum on September 24, 2009

Monday, September 21, 2009

The Economic Downturn and the IT Diet...

Without a doubt, the economic downturn has taken its toll on IT, but did all of the technology cost-cutting that organizations were forced to perform end up being a good thing? I'll answer this question with a short story...

Once upon a time there was a very obese person... sort of a "super size me" kind of guy. One day, he got a terrible scare with his first heart attack at age 34. Fortunately, he survived and is now a much fitter and leaner individual at age 35. Is that a good thing?

Of course it is!

Well, to play devil's advocate, it would not have been so good if he had just starved himself. But this individual joined a gym, started a regular exercise program that included both aerobic and anaerobic exercises, and went on a nutritionally balanced diet. In a similar "vein", a "balanced and strategic" technology cost-cutting initiative is a good thing too, since most organizations were long overdue for such an effort, having become "obese" during the long period of economic prosperity.

Finally, to bring our story full circle, just as a balanced diet and regular exercise are good for maintaining personal health, technology cost-cutting initiatives should be ongoing and continuous as well to maintain good IT health.

* Originally posted in the ebizQ Cloud Computing Forum on September 21, 2009

Friday, September 18, 2009

Is Service Reuse Overrated as a Value Proposition for SOA?

Yes, SOA reuse is highly overrated.

When SOA first started climbing up the hype cycle, it was pushed to developers as a way to increase the "reusability" of their code by making everything into a service. This created at least two problems. The first was massive service proliferation: developers eager to jump on the SOA bandwagon made everything into a service, which in turn led to poorly performing architectures (not SOA's fault) and huge issues around service management and governance. The second was that upper management and the mainstream media also started "drinking the reuse Kool-Aid" served up by the bottom ranks.

The fact is that reuse is only a by-product of SOA. Adopting SOA for reuse is like saying that you're working out to sweat (instead of to lose weight, build muscle, or improve overall health). In fact, I would contend that the reuse provided by SOA is only marginally better than what we've had in previous architecture generations. We've gone through libraries and modules in procedure-oriented architectures, objects in OO architectures, components in component-based architectures, and now services in SOA. A key issue around reuse is not the technology but the identification, segregation, and granularity of "reuse" items - none of which depends on which architectural style you are using. The real value of SOA is as a building block of the larger enterprise-level strategy of aligning IT with the business to ensure that IT value is justified, IT supports the business objectives, and IT has an equivalent level of agility to adapt to changing business needs. In some cases, SOA even becomes the catalyst that spurs the alignment of IT to business objectives.

In a nutshell, SOA is about strategic IT alignment with business goals and objectives. Service reuse is only an operational - not even tactical - outcome of SOA. A corollary to the discussion is that many bottom-up adoptions of SOA start with lofty reuse objectives and either quickly become disillusioned or fail to demonstrate adequate value to the rest of the organization because their original reuse goals are far from being met. That is why SOA has the best chance of success with a top-down adoption focused on long-term strategic alignment objectives rather than operational reuse objectives.

Originally posted on the ebizQ SOA Forum on September 18, 2009

Thursday, September 10, 2009

Do You Think the Pervasive Use of Cloud Computing Will Expand or Contract the Use of SOA?

Peter Schooff, Managing Editor at ebizQ, asked this question on the ebizQ SOA forum today.

Most SOA and Cloud practitioners have probably wondered about this very same thing. I definitely have asked myself this question - albeit in different forms - many times over the past couple of years. A while back (2007), I wrote an article in ebizQ titled Leveraging Synergies: RTI and SOA Unite in which I talked about using RTI (think of an internal, private cloud) and SOA together for two main reasons:


  1. They share similar objectives
    Both RTI and SOA seek to maximize ROI albeit at different layers of the technology stack; RTI at the infrastructure (hardware) layer and SOA at the business application (software) layer.

  2. They complement each other
    Using one concept (SOA or RTI) without the other leads to an impedance mismatch between the infrastructure and application layers resulting in suboptimal benefits realization.

The same logic applies to application architecture targeted to cloud environments. In fact, you could say that the cloud is the "A" in SOA :). With clouds, SOA has a winning partnership such that if there were ever any doubts about the long-term strategic benefits of SOA, clouds help alleviate most if not all of them.

Originally posted in the ebizQ SOA Forum on September 10, 2009.

Wednesday, September 9, 2009

Are Private Clouds an Abomination?

This is the question posted by Phil Wainewright in the Web 2.0 ebizQ forum where I am one of the panelists. Here's my answer:

A "private" cloud is just as much of a cloud as any other.

The typical private cloud is a cloud in which the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise. Since all consumers of the private cloud are “trusted” (i.e. within the organization’s legal/contractual umbrella such as employees, contractors, & business partners), multi-tenancy, which is a big deal in the public clouds, becomes less of an issue.

However, a private cloud still satisfies the essential characteristics of what a cloud is.

NIST, for example, defines five essential characteristics of a cloud: On-demand self-service, Broad Network access, Resource Pooling, Rapid Elasticity, and Measured Service. Now consider the Cloud Security Alliance, which also defines the cloud in terms of five principal characteristics: Abstraction of Infrastructure, Resource Democratization, Services Oriented Architecture, Elasticity/Dynamism of Resources, and Utility model of Consumption & Allocation. Every other authoritative definition that I have come across defines the cloud in the context of similar essential characteristics. There is no reference to ownership of resources, to sharing the cloud between organizations, competitors, or national or international boundaries, or to the use of the public Internet as the backbone. That is because these are not essential characteristics of a cloud.

So, no, a “private” cloud is not an oxymoron. Would "public" cloud providers like us to believe so? Sure, but the fact is a private cloud is a necessity at times due to organizational maturity/capability/culture constraints, regulatory requirements, or SLAs that cannot otherwise be met.

Originally posted in the ebizQ Web 2.0 Forum on September 8, 2009.

Wednesday, August 19, 2009

Is Cloud All Hype?

Gartner's latest hype cycle diagram shows cloud at a peak of hype. Is there any substance to the technology or is it just smoke and mirrors?

Cloud Computing is definitely not just all hype. Are there unreal expectations related to the "cloud"? Sure there are, which is no different than with any other "star" technology such as SOA. The "tending to infinity" number of definitions of a cloud is not helping; nor are all the vendors scrambling to reposition their products and services as being what "cloud" is all about. But when you cut through whatever hype there is and dispel the myths, you end up with a business and technology model that is rooted in best practices and a solid track record. The underpinnings of cloud are not new. Virtualization, SOA, Utility and Grid Computing, and the various "as-a-Service" platforms have been around in one form or another for a while, each with success stories of its own. Keeping that in mind, "cloud" is simply the next stage in the continuing maturation of our use of technology to achieve business goals in the most efficient and cost effective manner. Cloud Computing is definitely real and is here to stay for the long haul.

* Originally posted in the ebizQ Cloud Computing Forum on August 19, 2009.

Tuesday, July 21, 2009

Where Do you Think Web 2.0 Applications Are Going Next?

With so much buzz happening around Facebook, LinkedIn, and Twitter, what do you see will be the next wave for Web 2.0?

That's what I've been researching these past few days. The answer to me seems to be a combination of what is referred to as Web 3.0 and Web 3D.

Web 3.0 is taking the current Web 2.0, which focuses on building relationships and social networking, into the next phase of converting the "relationship" data into knowledge that can then support "context sensitive" and "semantics based" searching. In the ideal Web 3.0 world, the browser would act like a personal assistant. As you search the Web, the browser learns what you are interested in. So, the more you browse, the more it learns about you until eventually it can confidently answer questions such as "where should I go for lunch?" or "what movie should I watch?".

Web 3D combines virtual reality elements with the persistent online worlds of massively multiplayer online role-playing games (MMORPGs) transforming the Web into a digital landscape that incorporates the illusion of depth. In a Web 3D world you would navigate the Web through a digital representation of yourself called an avatar.

Ultimately, I think we'll see elements of both, neither 100%, in the future of Web 2.0. Also, I strongly feel that the Web will continue to merge with other forms of entertainment until all distinction between media is lost, to the point that radio programs, television shows, and movies all rely on the Web as a delivery system.

* Originally posted in the ebizQ SOA Forum on July 21, 2009.

Thursday, July 16, 2009

Do You Believe that the Quality of SOA Related Projects Will Improve or Decline in the Future?

There is a lot of talk about whether SOA is dead and, if not, then will the number of SOA projects increase in the future. But SOA project 'Quantity' is only half the story. A truly balanced discussion about SOA requires examining SOA project 'Quality' - the other side of the 'Quantity' coin. So, the question is: Do You Believe that the Quality of SOA Related Projects Will Improve or Decline in the Future?

Find out how industry experts are responding to my question on the ebizQ SOA forum...

Thursday, July 9, 2009

How Will SOA Vendors Adapt to the Emerging Cloud Paradigm?

Has anyone asked you whether "SOA is dead" recently?

SOA, along with other technologies such as Virtualization, Grid Computing, Broadband Networks, Open Source Software, Web 2.0, etc., forms the foundation for Cloud Computing. We all know how hot Cloud Computing is, being in the very early stages of its hype cycle. So it stands to reason that SOA vendors will very likely jump on the Cloud Computing bandwagon in order to marginalize any discussion about the relevance of SOA. After all, as they would say, how can SOA be dead if it forms a foundation for the "new" best thing since sliced bread - Cloud Computing?

* Originally posted in the ebizQ SOA Forum on July 9, 2009.

Tuesday, June 30, 2009

Are Social Networks Working for Businesses?

Social networks definitely get their fair share of hype but are they actually providing any true business value?

Assessing the business value of Web 2.0 is similar to assessing the business value of an analytical (OLAP) database in that the real value of an OLAP database is not in the database itself but rather in the "knowledge" gained by analyzing the "patterns" of data within the database. Web 2.0 is the philosophy and associated web-based tools (such as Twitter, LinkedIn, Facebook, MySpace, Orkut, etc.) for creating a more "personalized", "humanizing" web experience. Just as in the case of an OLAP database, the real business value of Web 2.0 is neither in the philosophy nor in the toolset but in the "relationships" and personal information captured about users that was previously unavailable in Web 1.0. Companies across the world are burning the midnight oil trying to decode all of this new data about online relationships, hoping to strike gold with profitable insights. Companies such as Facebook have grandiose visions about changing the way people interact over the web and even displacing Google as the "search engine" of choice. Decoding these online relationships in the sea of data within the depths of Web 2.0 could be just the key needed to unlock unheard-of riches!

* Originally posted on ebizQ Forum on June 30, 2009

Friday, June 26, 2009

Is There Such a Thing as "SOA-in-a-Box"?

Given that SOA is an architectural style rather than a product, SOA-in-a-box may be a stretch, but an SOA framework can come very close to a "box" if it meets the following criteria:

  1. Provide a comprehensive methodology from inception to retirement. The methodology should describe the method for defining an SOA in terms of a set of building blocks and show how the building blocks fit together.
  2. Provide a common vocabulary.
  3. Include a list of recommended standards.
  4. Provide guidance on implementation patterns.
  5. Provide a comprehensive set of "starter" models, templates, samples, best practices, and project management related artifacts.
  6. Contain a prescriptive recommendation of a set of tools and compliant products that can be used to implement the building blocks.

* Originally posted on ebizQ's SOA Forum on June 26, 2009

Sunday, June 21, 2009

Is SOA Being Employed as a Tool for Streamlining or Cutting Budgets, or is SOA a Victim of the Budget Knife Itself?

My opinion: "SOA is not even a factor in the larger discussion of budgets."

Budget decisions are made based on strategic, operational, and tactical objectives - in that order of preference. SOA really should not play a role in that decision as it is a means to an end. That is, SOA is the means to achieving IT alignment with the business, with the end goal of becoming more "agile" in meeting the business objectives. So, the budget decision might be to marginalize the priority of a certain set of business objectives, which in turn might nix an SOA initiative. Conversely, as other business objectives gain priority, SOA should be a consideration in the implementation of those objectives. So, in a nutshell, whether SOA gets used as a tool to streamline operations or gets the knife itself is really a "side effect" of business-level decisions related to long-term goals and objectives.

* Originally posted on ebizQ's SOA Forum on June 21, 2009

Saturday, June 13, 2009

Hot off the Press... Silver Line Your Cloud with 6 Strategic Considerations

"Entrusting your company data to the cloud is a serious commitment. Here are six considerations to keep in mind when deciding if a cloud vendor is worthy of that commitment."

Read more...
Silver Line Your Cloud with 6 Strategic Considerations

Friday, June 5, 2009

What's Killing BI in the Enterprise?

My answer to this question is one that would apply equally to many other enterprise-level initiatives such as SOA -- an obsessive focus on technology. Yes, technology is ultimately the vehicle through which an initiative such as SOA or BI is delivered and made available, but it is certainly not the end goal. In fact, I would contend that BI and SOA are not end goals either! Rather, the end goal driving the need for BI is the ability of an organization to be more responsive, proactive, and agile in an ever more competitive marketplace. Having said that, I have seen many an organization where BI and SOA initiatives are driven by technologists, who are much more focused on tools and technologies (and the related concerns of scalability, availability, etc.). The end goals are almost forgotten and the desired benefits never materialize. So, technology-based "tunnel vision" is what I think is the leading cause of death for BI projects in the Enterprise.

* Originally posted on ebizQ's SaaS Forum on June 5, 2009

Monday, June 1, 2009

Should SaaS Vendors Offer On-Premise Options?

The answer to the question as posted, in my opinion, is "Why not?" It's like asking "Should Ford's Model T have been made available in colors other than black?" I think the real question here is "Can the SaaS Vendor Offer On-Premise Options?" The answer to this question is much more involved as it depends on the vendor's business model and long-term vision/strategy, what exactly the SaaS vendor is offering, the variability and configurability of the offering, and so forth. But even if the SaaS vendor could somehow package up an "on site" version of its offering, could the client "afford" it with the required initial capital investment, ongoing support expenses, and appropriately skilled resources?

* Originally posted on ebizQ's SaaS Forum on June 1, 2009

Sunday, May 31, 2009

Do You Believe SOA Related Projects Will Increase or Decrease in the Future?

The answer is a resounding "YES". SOA has been around in one form or another well before the acronym SOA was coined. The goal of well-meaning architects has always been to align business and IT, increase business agility, and establish IT value/credibility with robust user-oriented solutions. SOA is the latest name given to that admirable, always slightly out-of-reach goal. The same goal will be called something else later but the principles of SOA will stand. The architecture pattern embodied in SOA has stood and will continue to stand the test of time.

A related question about the biggest SOA inhibitors was answered previously on the ebizQ SOA forum. So, while I am confident that the number of SOA projects will increase in the future, the rate at which that happens depends on how well those inhibitors are overcome.

Another question that comes to mind relates to the never-ending debate between "Quantity" and "Quality". While numbers are important, they don't tell the whole story. Are we better off if the number of SOA projects goes up while the success rate (aka quality) stays flat or gets worse? Yes, thinking about quantity is important, especially as one embarks on new initiatives and wants to ensure they will not fall behind the times even before the initiative is over, but remember that any discussion about quantity must be balanced with a corresponding discussion about quality.

So, I end this post with the question:

Do You Believe that the Quality of SOA Related Projects Will Increase or Decrease in the Future?

* Originally posted on ebizQ's SOA Forum on May 31, 2009

Wednesday, May 20, 2009

What Do You Believe is the Biggest Inhibitor Today to SOA Adoption?

The inhibitors to an SOA implementation are many and just as varied as there are benefits to an SOA implementation. And while some inhibitors may be more universal than others, which inhibitors affect a given organization is often dependent on the organization itself.

Here are six key inhibitors that I have experienced over the years:
  1. Unrealistic or misunderstood goals and expectations— As with any substantial project, level-setting expectations and gaining a common understanding of the goals/benefits amongst the stakeholders is key to the success of an SOA implementation.

  2. Tactical instead of Strategic Approach— SOA is best implemented top-down with a strategic view of the business processes and functions to be exposed as services. Bottom-up implementations too often tend to be overly focused on technology rather than the business.

  3. Not putting your money where your mouth is— Let's be realistic: The best intentions won't go too far without money being spent. SOA requires an investment in your resources (people, tools, and technology) to be successful. There is no free lunch in life.
  4. Poor Governance— Poor or complete lack of governance can lead to service proliferation, which can ultimately lead to more severe problems with data integrity, meeting SLAs, and privacy/security breaches.
  5. Organizational Power Struggles— In an SOA the focus moves away from individual applications and data stores to more business-oriented services that may cross business unit and organizational boundaries. This breakdown in application and data barriers may cause heartburn for the owners of those applications and data, who may see this as a challenge to their power and may see an advantage in hindering the SOA implementation.
  6. Fear and Ignorance— SOA is surrounded by many misconceptions regarding exactly what it is and what it takes. Too much has been said about all the technical details/variations leading to the misguided belief that any SOA implementation must have "all that" technology and complexity. Ignorance breeds fear, which then becomes a major inhibitor.
But these are just six of the inhibitors that I have seen; there are many more. Remember, each organization is unique in its own way and the challenges it will face as it implements an SOA will depend on its unique culture and history.

* Originally posted on ebizQ's SOA Forum on May 20, 2009

Thursday, May 14, 2009

Hot off the Press... What's in it for IT?

With no end in sight to the current economic downturn, a burning question for IT professionals has become “Is Obama’s strong penchant for and his belief in the transformational capability of technology evident in how funds are allocated in the stimulus package?” Being an IT professional myself, I too set out to find that answer.

Follow my journey to seek the answer in my latest article on "What Obama's Stimulus Plan means for IT" in the Gameplan section (page 10) of the May 2009 issue of Baseline magazine.

Tuesday, May 12, 2009

What Impact is Web 2.0 Having on Marketing?

In my opinion, the most significant change that Web 2.0 technology has brought about in the marketing world is making marketing much more of a two-way street than ever before. Traditionally, marketing has been about pushing information to potential consumers, telling them how great your products are and why they should buy your product. Gathering information from them, especially proactively, has always been a challenge. Web 2.0 is the game changer that allows companies to proactively involve potential and existing consumers in marketing-related activities from product development to feedback to customer service. As an example, consider wikis - web sites that allow users to add, delete, and edit content - where employees and consumers alike can answer frequently asked questions about each product. Many companies have also engaged in online community building through the use of sites such as Facebook, MySpace, and Twitter. Such online communities not only bring consumers closer to the company and its products but also to one another, creating a feeling of loyalty through an extended family. However, as with everything else in life, one Web 2.0 strategy should not be expected to fit all; companies must try different things to find out what works best for their environment (product, company, and consumer space).

* Originally posted on ebizQ's SOA Forum on May 12, 2009

Thursday, May 7, 2009

How Can Small to Medium Sized Enterprises Benefit from SOA?

Big companies with big IT staffs are often the first to realize the benefits of IT projects like SOA. But what about small to medium-sized companies with a much smaller and often overworked IT staff? How do they stand to benefit from SOA in these tough times?

The primary benefit of an SOA to small and medium sized businesses (SMB) is exactly the same as it is for a large sized company - business and IT alignment. In other words, the value potential of an SOA, as an architectural style, is not tied to the size of a company. Having said that, the next question that might come to mind is "Are SMBs ready for an SOA?" They absolutely should be ready and my guess is that if they think they are not then it's probably because they have been misguided as to what SOA really is. SOA is not about Web Services and the hundreds of associated WS-* standards; nor is it about deploying a mandatory Enterprise Service Bus and encapsulating every single business interaction as a BPEL workflow. SOA is not about technology; it's about optimizing your business by aligning your IT capabilities with your current and anticipated business needs. Once you abstract all that technology out and think about SOA with your business hat on, you'll quickly see that SMBs have as much to gain with SOA as any other sized company.

* Originally posted on ebizQ's SOA Forum on May 7, 2009.

Thursday, April 30, 2009

Hot off the Press - TOGAF 9 Applied: One Iteration at a Time

TOGAF Version 9 came out with a bang on February 2, 2009. Although the core of TOGAF -- the Architecture Development Method (ADM) -- remains the same, there are many changes within the framework, making TOGAF even more modular and providing further standardization, guidance, and support around how the framework is applied in practice. Key enhancements include the addition of the newly defined Architecture Content Framework, making TOGAF a truly standalone framework, and a detailed set of guidelines and techniques for applying the ADM in a number of real-world scenarios. Another major change is that TOGAF 9 has eliminated the Resource Base, transitioning much of it to the newly introduced Architecture Capability Framework. Portions of the Resource Base have also been moved to the relevant TOGAF sections. For example, the complete discussion on Business Scenarios, which was formerly part of the Resource Base, is now its own chapter in Section III: ADM Guidelines and Techniques of the TOGAF 9 specification. In this article, I will focus on one particular enhancement to the TOGAF 9 framework -- the formalization of iterative application of the ADM (and hence TOGAF).

Check it out at http://www.ebizq.net/topics/soa/features/11247.html

Thursday, April 23, 2009

What's the Best Way to Measure SOA Success?

The Free Dictionary defines success as "the achievement of something desired, planned, or attempted." Success in an SOA initiative follows the same definition. SOA, in its purest form, is meant to align business and IT towards a utopian agility. Real-world SOA projects typically have more short-term goals in mind such as reusability, maintainability, interoperability, etc. Regardless, the success criteria of any SOA initiative should be defined upfront based on the goals of that initiative. It goes without saying that proper care should be taken that all goals and success criteria are SMART and agreed to beforehand by all stakeholders in unambiguous terms. Ultimately, just like beauty, I think success, too, is in the eye of the beholder.

Tuesday, April 14, 2009

Hot off the Press - Enterprise SOA: Five steps to the next frontier

What do enterprise architecture, virtualization, security, business intelligence, and organizational culture have in common with each other and with SOA? If you answered "very little to nothing at all," then think again because each one of these can make or break your SOA implementation.

To find out more check out my latest article on creating an enterprise class SOA implementation, which just got published today in IT World.

http://www.itworld.com/soa/66397/enterprise-soa-five-steps-next-frontier.

Wednesday, April 8, 2009

Cloud Computing - Who should define the standards?

Personally, I don't think there is a definitive answer to this question; at least not at this point in time. Whether defined by an individual company or by a group, there are plenty of examples of standards that have failed in both cases. For example, CORBA (POA, IIOP, etc.), defined with the full backing of the OMG, never really reached its full potential. On the other hand, Java (and its family), created initially by one company, Sun Microsystems, has become a de facto standard that is now supported by a whole community. Cloud computing is still in its infancy. Yes, it shows great promise, but so did Robotics and AI. Cloud computing still needs to prove itself to be viable beyond a few initial applications. As it proves itself, standards will emerge; some defined by an enterprising company and some by a group. The good news is that we have both going on right now. Companies such as Amazon and Google are leading the way with a solid beginning. At the same time, there are community efforts as well with the Open Cloud Manifesto and the Cloud Computing Interoperability Forum. What is certain is that plenty of standards will emerge but only a few will survive. Only time will tell which ones succeed.

Wednesday, April 1, 2009

Is "Guerrilla SOA" a Realistic Option When the CEO Doesn't Approve Your Budget?

A great question… and one that occurs more often than one might think and most definitely more than it should. In August 2006, I had written an article on SOA Antipatterns in ebizQ. Antipatterns #3 and 4 specifically talk about this exact issue. The antipatterns are duplicated below for convenience:

Antipattern #3: Service Fiefdoms
This is the corollary of antipattern #2 (The "Big Bang" Approach). Here, instead of taking an enterprise view of the SOA transformation, each vertical silo within the company goes off on its own and recreates its applications as services within an SOA. In this case, there is the potential for a lot of duplication of effort due to the lack of an enterprise view. Furthermore, this fragmented approach to the SOA transformation often fails to create reusable organizational assets, which is one of the key benefits of undergoing the transformation that leads to higher organizational efficiency and improved cost effectiveness.

Antipattern #4: Technophilia
This is an antipattern that occurs when the SOA initiative is led bottom-up instead of top-down, i.e., when the "techies" are leading the initiative. Instead of the SOA initiative starting with a study of the processes that drive the business, the technology that will power the SOA becomes the apex of the effort. Instead of representing the business, the drivers of the SOA initiative get too wrapped up in technology specifics such as XML, Web Services, determining which versions of the various technology standards to use, deciding how much BPEL will be needed, etc. Ultimately, such an SOA initiative does not yield the intended business benefits of creating reusable organizational assets.

The bottom line is that several years of experience has shown that for an SOA initiative to truly achieve its objectives it has to be a top-down effort; both organizationally and architecturally.

Tuesday, March 31, 2009

Does BI Go Together With SOA, or Should They be Considered Separate Projects?

As I ponder this question of whether SOA and BI go together or not, I am reminded of a concept I learned during my undergraduate studies about process control. Fundamentally, there are two types of processes: Open loop and Closed loop. Open loop processes are those in which the process executes from start to finish based only on the inputs to the process. In contrast, closed loop processes not only take into account the process inputs but continuously observe the process outputs and make dynamic adjustments aimed at improving process efficiency, correcting errors, or both.

As we already know, SOA is an architectural style that strives for business and IT alignment. SOA by itself is an open loop process because it achieves this alignment based only on the current business state and lacks the feedback mechanism to constantly ensure and optimize this alignment once it has been achieved. That is where BI fits in. BI is the broad category of applications and technologies that gather, store, analyze, and provide access to data aimed at helping the enterprise make better business decisions. These “better” decisions are what “close” the open loop SOA implementation by providing the feedback to ensure the continuous alignment between the business and IT. Click here to see the same concept pictorially.

So, although SOA and BI are fundamentally different, they can be very effective together since they both ultimately strive for business process efficiency, albeit in their own way. Now whether they are implemented together in the same program, as separate projects, or as subprojects of one project is purely an implementation decision that each organization must make for itself based on its individual capabilities.

Thursday, March 19, 2009

Do You Think IBM Is Really Going to Buy Sun Microsystems?

Although the proposition of IBM acquiring Sun seems very attractive on the surface, I don't see the acquisition going through due to problems along two lines: Technology and Culture. Let's take a look at each one:

1. Technology related
Each company has a strong suite of products (Sun with SPARC, Solaris, Java, etc. and IBM with System z, AIX, WebSphere, etc.) with a strong following and customer base. These products are different enough to present serious challenges in creating a unified, consistent technology platform in a combined company.

2. Culture related
By far the biggest problem in the merger of two huge companies is going to be integrating the organizations, people, and processes. Ultimately, a dysfunctional culture in the resulting company might outweigh any potential benefits from synergies.

Personally, I always cringe when competition is reduced by M&A. We're seeing what's going on with the banks becoming too big to fail while paradoxically being too big to manage as well. Would IBM + Sun equate to the same?

Monday, March 16, 2009

Does SOA Increase Security Risks?

SOA is an architectural style that is being used in most modern day system implementations with great expectations. A question that many have, though, is how secure are these SOA-based systems? Are they any better or worse off than their non-SOA counterparts? In my opinion, there are three main reasons why SOA-based systems might be less secure than their non-SOA brethren:
  1. The first reason is what I call "SOA Security Proximate Cause Syndrome". Proximate cause is a legal term that allows one to link the effect of one action as the cause of another action. Although there is no written rule that states that SOA systems must be distributed, the fact remains that SOA is the preferred architectural style for complex systems, and complex systems tend to be distributed. Distributed systems in turn tend to have a higher "surface area". The more surface area a system has, the more vulnerable it becomes. Thus, the distributed nature of SOA systems becomes the proximate cause of their potentially weaker security.
  2. The second reason is what I call the "SOA Security Paradox". An SOA is by its very nature designed to be highly flexible, extensible, and maintainable. Now, think about the classic security principle of "Security through Obscurity". Therein lies the paradox -- a conflict between the inherent goals of SOA and their implications for security.
  3. The third reason is poor SOA governance. In the absence of strict governance (design-time and runtime), SOA systems tend to suffer from service proliferation similar to a virus spreading through its host. These unchecked services often open previously unthought-of security loopholes. As an example, consider a service that is always called by a client on the extranet through an authentication service. A new "rogue" (i.e., ungoverned) service on the intranet calls this same service without the use of the authentication service. Now, consider what happens if this new "rogue" service is called by the extranet client. Oops! Did we just bypass the authentication service? This simplistic example plays out more often than one might think; a minimal sketch of the defensive fix follows below.
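
To make that last point concrete, here is a minimal sketch, in plain Java, of a service that enforces authentication on every call instead of assuming an upstream authentication service was already invoked. The class, method, and TokenValidator names are purely illustrative assumptions, not part of any particular SOA product.

    import java.util.Map;

    public class AccountService {

        // Hypothetical component wrapping whatever identity infrastructure
        // (STS, LDAP, etc.) the organization already has in place.
        public interface TokenValidator {
            boolean isValid(String token);
        }

        private final TokenValidator validator;

        public AccountService(TokenValidator validator) {
            this.validator = validator;
        }

        // The operation verifies the caller itself, so a "rogue" intranet service
        // cannot quietly expose it to unauthenticated extranet callers.
        public Map<String, Object> getAccount(String accountId, String authToken) {
            if (authToken == null || !validator.isValid(authToken)) {
                throw new SecurityException("Caller is not authenticated");
            }
            return loadAccount(accountId);
        }

        private Map<String, Object> loadAccount(String accountId) {
            // Data access omitted in this sketch.
            return Map.of("id", accountId);
        }
    }

The design point is simply that each service owns its own policy enforcement (or delegates it to governed infrastructure), rather than trusting that every caller arrived by the approved path.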

So, is an SOA system inherently insecure? In principle it shouldn't be but our experience in practice has proven otherwise.

* Originally posted on ebizQ Forum on March 11, 2009

KISS Your Web Services

In my post titled "WS-Confusion", I talked about the state of confusion that many professionals dealing with Web Service technology are in. Well, that blog entry stirred up some interest and I ended up writing a follow-up article about the issue. The article, titled "KISS Your Web Services", is available here.

WS-Confusion?

There’s absolutely no doubt about it… Web Services are hot and are here to stay. XML Schemas, SOAP, and WSDL are all indispensable while working with Web Services. Yes, to some extent (and in some form) UDDI too. And let’s not forget the security-related specifications such as XML Encryption, XML Digital Signatures, and WS-Security, which are quite useful when Web Service boundaries extend beyond the corporate firewall. As a consultant and an architect, I have implemented and audited/assessed complex business software systems that leverage Web Service technology as a core part of their architecture. The specifications that I mentioned above are pretty much all that I have used or seen used. Furthermore, all of the Web Services have always been over HTTP/HTTPS. So what about all the other Web Service specifications such as WS-Transaction, WS-Routing, WS-Reliability, WS-ReliableMessaging, WS-BPEL, WS-Notification, WS-Eventing, WS-AtomicTransactions, WS-Coordination, WS-SecureConversation, and so on? While I agree with the theory behind these specifications and grant that most of them are very well written, my question is: Are we making Web Services more complex than they need to be? I am very interested in knowing whether any of you have used, or seen used, these or other WS-* specifications in real-world (existing) systems.

Client Side Data Validation: A False Sense of Security

Microsoft defines a Web Application (WebApp) as a software program that uses HTTP for its core communication protocol and delivers Web-based information to the user in the HTML language. Such applications are also called Web-based applications. Although one could create a custom client for such an application, most applications will leverage an existing web browser client such as Internet Explorer, Netscape Navigator, Opera, etc. In this blog, I will be focusing entirely on the set of webapps that leverage a browser on the client side.

There are many benefits of creating a web application. A few of them are:
  • The ability to leverage existing communication infrastructure and protocols
  • The ability to leverage existing client side software (browsers) thus reducing the total development time and related costs.
  • Reduced client-side deployment costs. For most webapps the only software required by a client is a compliant browser.

There is no such thing as a free lunch, and webapps are no exception. One of the major cons of webapps is the loss of control over the client software and environment. For example, if a webapp is designed for public access then it may be accessed from machines running different versions of the same browser, different versions of different browsers, different operating systems, and different hardware devices (such as kiosks, cell phones, and PDAs). It is also likely to be accessed by a type of user we affectionately refer to as a “hacker”. The primary objective of a hacker is to gain illegal access and control over your webapp and either cause it to malfunction/crash or expose sensitive data.

That’s where data validation and a properly designed validation framework fit in. No, this is not a typo or misprint. Security details such as SSL, certificate management, firewalls, etc. are important, but provide only the icing on the cake. They don’t guarantee that the cake (i.e. your webapp) is baked well. What I mean is that while these “details” may make it harder for a hacker to find holes in your webapp, they don’t seal the loopholes themselves. That is, they only delay the inevitable hacking of your application. Take SSL as an example. A common misconception is that SSL provides web application security. The fact is that it does not. SSL is used only to encrypt the data between the web browser and the web server, and thus prevents eavesdropping. SSL has no knowledge of your webapp and hence provides no security to it.

As a software consultant, I’ve had the opportunity to not only design and implement webapps but to assess/audit many webapps as well. I often encounter web pages within an application that are very sophisticated, with lots of client-side JavaScript that performs all kinds of checks on the data entered by the user. Even the HTML elements have data validation attributes such as MAXLENGTH. The HTML form is only submitted upon successful validation of all the data entered. The server side happily performs the business logic once it receives the posted form (request).

Do you see the problem here? The developers have made a big assumption of “control” here. They assume that all users of the webapp will be equally honest. They assume that all users will always access the webapp through the browser(s) that they (the developers) have tested on. And so on. What they have forgotten is that it is very easy to simulate browser-like behavior from the command line using freely available tools. In fact, almost any “posted” form can be sent by typing in the appropriate URL in the browser window, although an easy way to prevent such “form posting” is to disable GET requests for these pages. But there is no way to prevent anyone from simulating, or even creating, their own browser to hack into your system!
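
To make this concrete, here is a minimal sketch in plain Java of what "simulating browser-like behavior" amounts to. The URL and field names are hypothetical; the point is that none of the page's JavaScript or MAXLENGTH checks exist on this code path.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class DirectFormPost {
        public static void main(String[] args) throws Exception {
            // A form field value the browser-side validation would have rejected.
            String formData = "username=value-the-page-would-have-blocked&password=x";

            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://example.com/webapp/login").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

            try (OutputStream out = conn.getOutputStream()) {
                out.write(formData.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("Server responded with HTTP " + conn.getResponseCode());
        }
    }

Whatever this program sends is exactly what the server-side code receives, which is why the server must validate it all over again.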

The underlying problem here is that the developers have failed to recognize the main difference between client side validation and server side validation. The main difference between the two is NOT where the validation is occurring such as on the client or on the server. The main difference is in the purpose behind the validation.

Client side validation is merely a convenience. It is performed to provide the user with quick feedback. It is performed to make the application appear responsive and give the illusion of a desktop application.

Server side validation, on the other hand, is a must for building a secure webapp. It is done to ensure that all data sent to the server from the client is valid data, no matter how the data was entered in on the client side.

Thus, only server side validation provides real application-level security. Many developers fall into the trap of a false sense of security by performing all data validation only on the client side. Here are two examples to put things in perspective:

Example 1
A typical “Logon” page has a textbox to enter a username and another textbox to enter a password. On the server side, one may encounter some code in the receiving servlet that constructs a SQL query of the form "SELECT * FROM SecurityTable WHERE username = '" + form.getParameter("username") + "' AND password = '" + form.getParameter("password") + "';" and execute it. If the query comes back with a row in the results then the user successfully logged in, otherwise not.

The first problem here is the way that the SQL is constructed, but let’s ignore that for this blog. What if the user types in a username such as "Alice'--"? Assuming that there is a user named "Alice" in SecurityTable, the user (or shall we call her "hacker") successfully logs in. I’ll leave finding out why this happens as an exercise for you.

Some creative client side validation can prevent normal users from doing this from the browser. But what about the case where JavaScript is disabled on the client, or those advanced users (hackers) who can use another “browser like” program to send direct commands (HTTP POST and GET requests)? Server side validation is a must to prevent something like what was described above from happening and hence plug this security hole in the webapp. SSL, firewalls, and the like won’t help you here.
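
For completeness, here is a minimal sketch of the server-side fix, assuming a JDBC DataSource is available; the class, table, and column names are illustrative. The parameterized query treats whatever the user typed purely as data, and the input check runs on the server no matter what the browser did.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.sql.DataSource;

    public class LoginValidator {

        private final DataSource dataSource;

        public LoginValidator(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public boolean isValidLogin(String username, String password) throws Exception {
            // Server-side validation: reject malformed input before it reaches the
            // database, regardless of any checks the browser may have performed.
            if (username == null || password == null
                    || username.length() > 50 || !username.matches("[A-Za-z0-9_.@-]+")) {
                return false;
            }

            // A parameterized query keeps user input out of the SQL text itself,
            // so a value like "Alice'--" can no longer rewrite the WHERE clause.
            String sql = "SELECT 1 FROM SecurityTable WHERE username = ? AND password = ?";
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, username);
                stmt.setString(2, password);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }

(Storing and comparing plain-text passwords, as the original query implies, is its own problem; hashing them is a separate fix outside the scope of this sketch.)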

Example 2
A typical “User Registration” page for a public but limited-access webapp includes several textboxes for user identification and authentication information such as Name, SSN, Date of Birth, and other relatively uniquely known information. Once the user has proven her identity, she is issued a username and password to access the protected parts of the system. In special cases, the user may be prevented from directly registering and an “Administrative” user (i.e. the administrator) may have to register the user instead. The administrator uses the same registration page, but checkmarks a special checkbox on the page that tells the system (receiving servlet) to bypass all [special] checks and directly register the user. The JSP that renders the HTML page is smart enough only to include the checkbox if the user accessing the page is an administrator.

So far so good? Not unless there is some server side validation in the receiving servlet. The receiving servlet checks to see if the “bypass checks” parameter is present and if it is then it bypasses all special checks and registers the user. But it must also check to see if the logged in user is an administrator. Even though the JSP page did that when it rendered, the receiving servlet must perform the check again. In this case, the JSP check can actually be considered as part of the client side validation. It was merely done for convenience. After all, we don’t want to confuse regular users trying to register with the extra checkbox, since no matter what they select (checked or unchecked), it’s not going to make a difference. This is because regular users do not have the authority to bypass “authentication” checks. Furthermore, it would not be an impossible task for a hacker to figure out what the name of the checkbox was and manually issue a registration request with the checkbox name included in the request (with its value set to “checked” of course). Therefore the receiving servlet must check the identity of the logged on user (if there is a user logged on) and only allow an administrator to bypass special checks in the registration process.
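
Here is a minimal, hypothetical sketch of that server-side re-check using the javax.servlet API. The parameter name, session attribute, and commented-out helper methods are illustrative assumptions, not code from any real application.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class RegistrationServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            boolean bypassRequested = "checked".equals(request.getParameter("bypassChecks"));

            // Never trust that the checkbox was rendered only for administrators;
            // re-verify the caller's role from server-side session state.
            HttpSession session = request.getSession(false);
            boolean isAdmin = session != null && "ADMIN".equals(session.getAttribute("role"));

            if (bypassRequested && !isAdmin) {
                response.sendError(HttpServletResponse.SC_FORBIDDEN,
                        "Only administrators may bypass registration checks");
                return;
            }

            // if (!bypassRequested) { performIdentityChecks(request); }  // hypothetical helper
            // registerUser(request);                                     // hypothetical helper
        }
    }

The JSP's decision to hide the checkbox remains purely a convenience; the servlet makes the authorization decision again, on data the client cannot tamper with.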

Conclusion
Remember, client side validation is for convenience and server side validation is for security. You must always perform at least as much validation on the server as you perform on the client. All properly designed validation frameworks, such as the Struts Validation Framework, handle this for you. Feel free to leave me a comment and let me know your thoughts…

Model Driven Architecture - Hype Vs Reality

If you’ve been keeping up with your daily dose of buzzwords, then you’ve most likely heard of MDA. MDA stands for Model Driven Architecture and is required knowledge for cocktail discussions.

The Concept
The concept behind MDA is certainly not new and is quite simple in theory. In its most rudimentary form, it is the all too familiar code generation that has been offered by leading software modeling tools such as Rational Rose and the Together family of products. These tools allow you to model your system as a series of packages and classes (for Java) and then generate skeletal code based on these models. They also offer something called “round trip engineering” in which you can make changes to your code and import those changes into your models, thus keeping your models in sync with your code. Although it is a very noble concept, I have rarely seen it fully implemented in real projects. Even if a project did implement “round trip engineering”, I would question the benefit provided for the cost of doing so.

MDA is more than code generation. It is a formalization of several concepts by the Object Management Group (OMG). The OMG is best known for its distributed object model specification called CORBA and the widely used modeling language called the Unified Modeling Language or UML.

The Lingo
At the very core of MDA is the concept of a model. A model is an abstraction of the end (or target) system. It serves as a prototype and a proof-of-concept. MDA defines two types of models. A Platform Independent Model (PIM) is one that describes the target system without any details about the specifics of the implementation platform. On the other hand, a Platform Specific Model (PSM) describes the target system on its intended platform, such as J2EE, .NET, CORBA, etc. The process of converting a PIM into a PSM is called transformation. A model (PIM or PSM) is written in a modeling language. The OMG does not restrict MDA to any particular language. However, the modeling language must be well defined, which means that it must be precise to allow interpretation by a computer. Therefore, a human language such as English is not an option (at least for the foreseeable future). An example of a good modeling language is (obviously) the UML.

Is it just Hype?
As I mentioned earlier, MDA is not a novel concept. We already talked about code generation, but database designers have been using a form of MDA for a long time. ErWin by Computer Associates is a CASE (Computer Aided Software Engineering) tool that provides MDA capabilities to database designers and administrators. ErWin allows you to define your database design using a logical model. A logical database model is free from any database-vendor-specific details. In MDA terminology, the logical model is a PIM. ErWin automates the process of converting the database-agnostic logical model into a database-specific (such as Oracle, SQL Server, etc.) model. This database-specific model is known as the physical model by database designers and as a PSM in MDA lingo. As I mentioned earlier, the process of converting a PIM into a PSM is called transformation. Finally, ErWin can be used to generate the SQL code (DDL) to create the database structure (tables, views, indexes, triggers, etc.) for the targeted database. Based on the definition of MDA and the capabilities offered by the tool, ErWin is an MDA tool. In this case, the modeling language is the well-defined E/R diagramming notation. So there is no doubt that MDA tools are possible. However, there are several hurdles to overcome before such tools become mainstream for general-purpose software development, and especially for custom development.

Two such hurdles are:

Transformation Complexity: Although designing a properly normalized database that meets the business needs and application performance demands is a non-trivial task, the process of converting the database PIM into a PSM and the PSM into SQL is fairly mundane. Transforming complex class and interaction diagrams is a more involved and [possibly] artistic process with many possible alternatives, each with its own set of pros and cons.

Language Expressiveness: Once again, the simplistic E/R diagramming notation is sufficient for describing complex database diagrams, mainly because the complexity is not in the diagram but rather in the design decisions and tradeoffs considered while creating it. E/R notation is also universally accepted as the language used by database designers for data modeling. UML, on the other hand (even with 2.0), is controversial in its ability to express complex software interactions and is often extended with custom stereotypes and notations by software architects and designers. Even though MDA is not tied to UML, the reality is that UML is the lingua franca of MDA.

The Reality
In my opinion, MDA tools, even with their existing limitations, have a definite place in any architect’s toolbox. But then, everything can be taken to an extreme, and the same applies to MDA, which is not without its share of hype.

Here are the two hyped claims I encounter most often:

MDA brings Software Architecture to the masses: Remember, MDA is a tool in an architect’s toolbox. It is not the toolbox itself, and it is definitely not the architect. MDA does not eliminate the need for competent and experienced architects, designers, and coders on the team. As the saying goes, “Not everyone with a hammer in their hand is a carpenter.”

MDA equals Software Architecture using pictures: Is this really possible? Even in database modeling, where a level of MDA is already being used, how far does MDA take database architecture? Talk with any database designer or DBA and you will quickly realize that most of their work does not really revolve around using ErWin. The same applies to software architecture in general. It involves much more than drawing pictures. In fact, one could argue that it involves too much, which is why we are still struggling to come up with a universally accepted definition of software architecture.

So, my recommendation is to use the MDA tools for what they are… tools, and stay away from the hype. Maybe the next acronym to take root will be CDA or Command Driven Architecture (coined by yours truly). You basically tell (or command) the CDA tool that you want a robust, multi-tiered architecture for handling bank transactions and the tool creates it for you. And while it’s at it, maybe it will bake you some cookies as well. What do you think?

Sunday, March 15, 2009

Service Oriented Architecture - All that glitters is not gold

I had been in a quandary about what my first blog post should be ever since JavaWorld approached me with the idea of a Java Design blog. It was a few days later, as I was talking with a good friend of mine (after a couple of sets of rigorous tennis), that he happened to describe the architecture of a product he works on at his job. Without really thinking, I blurted out “Oh, it’s a service-oriented architecture”. As it turns out, it was not, which is when the idea of writing about the term service-oriented architecture came to mind. Thanks, Kurt.

The phrase “Service-oriented Architecture” is by far one of the most used and abused buzzwords today. It is abused not because people don’t know what the term means, but because they are too generous in its application. Although this may seem paradoxical, as you’ll see in a moment, it’s really not so.

First, let’s define a service-oriented architecture or SOA as it’s commonly abbreviated and referred to in conversation and literature. In its simplest form, an SOA is any architecture that can, at least on a logical level, be decomposed into three categories of components: a service, a provider of the service, and a consumer of the service.

Here’s the catch: almost any software application with a basic level of object orientation can be described in such a way, even if the designer of the application had never heard of SOA! The problem with this definition is that it is too vague and does not imply any level of sophistication in the application [architecture]. So how do you know if an architecture that appears to be an SOA actually is an SOA?

Here are four litmus tests that I typically use:

  1. Does the architecture explicitly address how service consumers find service providers? This test focuses on loose coupling between service providers and consumers. Typically, it is satisfied by an implementation of the Factory design pattern as described by the Gang of Four. One way of achieving this within the bounds of J2EE is registering service providers in a JNDI directory. A better way would be to implement the Service Locator pattern as described in Core J2EE Patterns (see the sketch after this list).
  2. Is each service provided by a provider explicitly bound by an input and output contract? Once again, this test focuses on loose coupling. However, in this case, we are concerned about the coupling between the service and its provider and consumers. One way of satisfying this test within J2EE is to start each service implementation with two interface definitions: one interface encapsulates all the input parameters and the other encapsulates the output. Web Services achieve this by using well-defined SOAP (XML) messages that specify the input and output, and by providing a well-documented description of these using the Web Services Description Language (WSDL).
  3. Does the architecture explicitly address location and distribution transparency? Test #1 described above gets us part of the way there. However, this test focuses more on the quality-of-service (QoS) characteristics of the architecture, such as service availability, fault tolerance, and the ability to achieve performance and load scalability through server load balancing, server farms, and distribution/deployment across multiple tiers.
  4. Are the services really just objects with another name? This test probes the architecture to see if it was actually designed as an SOA or simply labeled one for better marketing exposure. Services are not distributed objects. Objects are by definition stateful, i.e. they encapsulate some state and provide methods to manipulate that state. Services, on the other hand, are stateless. The input message has all the information that the service needs to perform its task, and the output message has all the information the client needs back from the service. Thus, the interaction of a service consumer with a service takes the form of a single call rather than an orchestration of multiple calls, as it is with a regular object.
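
As promised in test #1, here is a minimal sketch, with invented names, of what tests #1, #2, and #4 can look like in plain Java: explicit request and response contracts, a stateless service invoked in a single call, and a consumer that obtains the provider through a Service Locator instead of constructing it directly. A real locator would consult JNDI or a registry, and a real deployment would also address the QoS concerns from test #3; none of that is shown here.

    import java.util.HashMap;
    import java.util.Map;

    // Test #2: the service is bound by explicit input and output contracts.
    final class TransferRequest {
        final String fromAccount;
        final String toAccount;
        final double amount;

        TransferRequest(String fromAccount, String toAccount, double amount) {
            this.fromAccount = fromAccount;
            this.toAccount = toAccount;
            this.amount = amount;
        }
    }

    final class TransferResponse {
        final boolean approved;
        final String confirmationId;

        TransferResponse(boolean approved, String confirmationId) {
            this.approved = approved;
            this.confirmationId = confirmationId;
        }
    }

    // Test #4: the service is stateless; everything it needs arrives in the request message.
    interface TransferService {
        TransferResponse transfer(TransferRequest request);
    }

    // Test #1: consumers find providers through a locator, not by constructing them directly.
    final class ServiceLocator {
        private static final Map<String, Object> registry = new HashMap<String, Object>();

        static void register(String serviceName, Object provider) {
            registry.put(serviceName, provider);
        }

        static Object lookup(String serviceName) {
            return registry.get(serviceName);
        }
    }

    public class SoaSketch {
        public static void main(String[] args) {
            // Wiring that a container or deployment descriptor would normally perform.
            ServiceLocator.register("TransferService", new TransferService() {
                public TransferResponse transfer(TransferRequest request) {
                    boolean ok = request.amount > 0;
                    return new TransferResponse(ok, ok ? "CONF-123" : null);
                }
            });

            // The consumer: a single stateless call, fully described by the two messages.
            TransferService service = (TransferService) ServiceLocator.lookup("TransferService");
            TransferResponse response = service.transfer(new TransferRequest("A-100", "B-200", 250.00));
            System.out.println("approved=" + response.approved + ", id=" + response.confirmationId);
        }
    }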
I am almost certain that there are details about an SOA that I have missed in this blog. I would love to hear from you about your experiences with SOAs, both positive and negative, and about tests that you have used to weed out SOA imposters.