The Artima Developer Community

.NET Buzz Forum
Distributed .NET Computing Part 2

Sam Gentile

Posts: 1605
Nickname: managedcod
Registered: Sep, 2003

Sam Gentile is a Microsoft .NET Consultant who has been working with .NET since the earliest
Distributed .NET Computing Part 2 Posted: Feb 24, 2004 5:49 PM

This post originated from an RSS feed registered with .NET Buzz by Sam Gentile.
Original Post: Distributed .NET Computing Part 2
Feed Title: Sam Gentile's Blog
Feed URL: http://samgentile.com/blog/Rss.aspx
Feed Description: .NET and Software Development from an experienced perspective - .NET/CLR, Rotor, Interop, MC+/C++, COM+, ES, Mac OS X, Extreme Programming and More!


There has been a lot of great feedback on my original piece. I want to discuss these points further, but in logical chunks. One of the things I did, as I sometimes seem to do, was put several thoughts together that are actually somewhat orthogonal, even after three months of thinking about how best to write the piece. It makes sense now to address some of those points as separate topics. Bear with me.

For the first part, Adrian Bateman wisely discusses the difference between logical tiers in an N-tier architecture and physical deployment. This is something I obviously should have stated more clearly. An N-tier architecture is designed in logical tiers; scalability comes from short get-in, get-out operations rather than persistent connections to the database, and from separating presentation and business logic from data. None of this necessarily requires multiple hardware layers, nor does it lead to my conclusion that developers are going back to client/server systems. It's simply a way of designing for scalability and grouping components. You can certainly have a system that scales with two or more logical tiers on one physical box.
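To make the "short get-in, get-out" point concrete, here is a minimal sketch of what such a data-access call might look like in ADO.NET. The class, table, and column names are hypothetical; the point is simply that the connection is acquired for the duration of one call and released immediately, rather than held open.

// Minimal sketch of a "get in, get out" data-access call (hypothetical class/table/column names).
// The connection lives only for the duration of the call; nothing is held open between calls.
using System.Data;
using System.Data.SqlClient;

public class OrderData
{
    private string connectionString;

    public OrderData(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public DataSet GetOrdersForCustomer(int customerId)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlCommand command = new SqlCommand(
                "SELECT OrderId, OrderDate, Total FROM Orders WHERE CustomerId = @CustomerId",
                connection);
            command.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;

            // Fill opens and closes the connection itself; we are in and out quickly.
            SqlDataAdapter adapter = new SqlDataAdapter(command);
            DataSet orders = new DataSet();
            adapter.Fill(orders);
            return orders;
        }
    }
}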

At this point, there are two threads of thought I am parsing. The first is that I still don't see much literature (books, samples, examples) that discusses this kind of N-tier .NET architecture, although PAG and ServerSide.NET are starting to change that. I still contend that most .NET resources don't even consider it (and I don't think such discussions belong in, or exist in, Remoting books). I want them to, and I want to encourage .NET developers to learn as much as they can, but that's an orthogonal topic for another day. The second thread is physical deployment. I contended that deploying the middle tier on separate boxes usually gives far better performance relative to hardware costs, which Adrian challenges. I don't disagree with his statements that, performance-wise, this holds mostly for highly transactional systems, and that with smaller-scale systems the communication cost between layers adds latency. Which leads us to the fact that there is no universal answer for architecture.

Architecture is a complicated subject. It has to do with many things: knowing, first and foremost, the customer and the business needs, as well as the history, current state, and possible direction of the technology. But the most important thing is the business needs. Every architecture is different, and there is no one "correct" architecture, or even one way to arrive at one. It all depends on the business needs and the technology under consideration. Sure, we have some common patterns, as described in Fowler's most excellent book, and we have some best practices, but that's it.

So given that, what are some reasons for deploying one's middle tier on a separate box? Well, one very good reason, and one that we addressed in our design, is security. It's all about minimizing the threat target. Deploying the presentation and business logic tiers on the same physical box, at least with ASP.NET, can expose such an application to various security threats. Robert already said best what we thought through: "What happens when the web server is compromised, and your database credentials are sitting there open for anyone to look at? What happens when the web server is compromised, and someone looks in the registry at the DSN settings to see where that database is located, and how to access it?" With physical separation, the web server stores no connection strings and no DSN registry settings; in fact, there is no way to get to the database except through the middle tier on another physical machine. Robert elaborates further on some of the things we are thinking about there.
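As a hedged illustration of what that separation can look like, here is a sketch of the web tier talking only to a middle-tier facade over .NET Remoting (web services would work just as well). The interface, server name, port, and object URI are all hypothetical; the point is that the only configuration held on the web box is the address of the application server, not a connection string or a DSN.

// Hypothetical sketch: the ASP.NET tier calls a middle-tier facade over .NET Remoting.
// IOrderService, the server name, port, and object URI are illustrative only.
using System;
using System.Data;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

public interface IOrderService
{
    DataSet GetOrdersForCustomer(int customerId);
}

public class OrderServiceClient
{
    private static bool channelRegistered = false;

    public static IOrderService Connect()
    {
        if (!channelRegistered)
        {
            // Client-side TCP channel; no port means it only makes outbound calls.
            ChannelServices.RegisterChannel(new TcpChannel());
            channelRegistered = true;
        }

        // The only piece of data-access configuration the web server holds is this URL.
        return (IOrderService)Activator.GetObject(
            typeof(IOrderService),
            "tcp://appserver:8085/OrderService");
    }
}

The facade implementation itself lives on the application server, derives from MarshalByRefObject, and is the only code that knows how to reach the database.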

Is this a universal answer? Of course not. Not every architecture requires this type of arrangement, although giving more thought to security is always a good thing. Again, it all depends on the business needs of your particular application. That also applies to the use of ES/COM+, or even to the use of any distributed technology at all. Some people interpreted my piece literally to mean either that all distributed .NET apps should use ES, or that all .NET apps require tiers and scalable design built in from day one. Nothing could be further from the truth. To take the second point: obviously, there are huge classes of .NET applications that have nothing to do with distributed communication whatsoever, and those are not what I am talking about. For those that do apply, ES is not a universal answer either. Web Services is one answer, Remoting is another, ES is yet another, and Message Queuing is yet another. Business needs, business needs. Well, one of those business needs is distributed transactions. I am personally not so interested in the other features of ES/COM+, like Queued Components, but where I tend to see a big case for ES is in performing distributed transactions against multiple resource managers, like Oracle and SQL Server, in the same logical transaction, or against two or more databases that must be accessed in the context of one logical, atomic transaction. Again, even here it's not a universal answer. As Adrian points out, it's not as clear-cut as it was with COM+/Windows DNA, since .NET and the managed providers give tighter access to the database.
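For the distributed-transaction case, here is a hedged sketch of what an Enterprise Services component might look like when one unit of work spans SQL Server and Oracle. The class, table, and connection-string names are illustrative, and the assembly would still need to be strong-named and registered with COM+ (regsvcs.exe) to run; the point is that both connections enlist in the same DTC transaction because the component runs with TransactionOption.Required.

// Illustrative sketch only: a serviced component whose method spans two resource managers
// (SQL Server and Oracle) in one atomic COM+ / DTC transaction.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.OracleClient;
using System.EnterpriseServices;

[Transaction(TransactionOption.Required)]
public class OrderProcessor : ServicedComponent
{
    [AutoComplete]  // vote to commit if the method returns normally, abort if it throws
    public void RecordOrder(string sqlConnectionString, string oracleConnectionString, int orderId)
    {
        // Both connections automatically enlist in the surrounding DTC transaction.
        using (SqlConnection sql = new SqlConnection(sqlConnectionString))
        {
            sql.Open();
            SqlCommand insertOrder = new SqlCommand(
                "INSERT INTO Orders (OrderId) VALUES (@OrderId)", sql);
            insertOrder.Parameters.Add("@OrderId", SqlDbType.Int).Value = orderId;
            insertOrder.ExecuteNonQuery();
        }

        using (OracleConnection oracle = new OracleConnection(oracleConnectionString))
        {
            oracle.Open();
            OracleCommand auditOrder = new OracleCommand(
                "INSERT INTO ORDER_AUDIT (ORDER_ID) VALUES (:OrderId)", oracle);
            auditOrder.Parameters.Add("OrderId", OracleType.Number).Value = orderId;
            auditOrder.ExecuteNonQuery();
        }
    }
}

If either insert fails, the whole transaction aborts and neither database sees the change, which is exactly the guarantee you cannot easily get from the managed providers alone across two different resource managers.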

There is much more left to say about other parts of the original piece and other feedback, but I'd like this much to be the next step. Hopefully this clarifies some things and continues the discussion rather than confusing it. I'm sure you will let me know :-)

Read: Distributed .NET Computing Part 2

