Software Architecture
1.1 Introduction
Software architecture defines a set of components and the way these components are connected, so as to give a clear picture of the overall system.
Apart from the identification of elements and their relationships, a software architecture should comprise the following aspects.
As the elements to be defined are extensive and depend upon the domain, the architectural structures can be broadly divided into three groups, as below:
1. Module Structure
The elements are modules, which are individual units of implementation. This implies a code-based way of representation, as we talk about functions. The modules focus on the functional responsibility of the system. A module structure shows how modules interact with other modules, their level of interaction, the exchange of information, and whether the relationship between modules is generalization or specialization.
[Figure: modules (Module1, Module2, Module3) as functional units with code-based representations]
2. Component-and-Connector Structure
The components are the principal units of computation, and the connectors are the communication between the components. This structure focuses on the flow of data during system execution, the data stores that are shared, the replication of some elements, the feasibility of parallel processing, and the interaction of the executable components.
3. Allocation Structures
These structures show the relationship between the software elements and the elements in the execution environment. The processing, storage, and maintenance of these elements during development, testing, and system building are the focus of this structure.
The focus is on the software elements relationship in an external environment in
which they are created and executed. The external environment may include
- the processor in which they are executed
- the file system for storing software elements during development
- networking elements for interactions, etc.
Architectural design decisions involve the above three structures. Therefore you need to be careful in choosing your design structures, as they may have an impact on the design decisions to be made during the development phase.
The broad types of decision based upon these structures include:
How will you be structuring the system
- with a set of code units as modules
- a set of elements with runtime behavior as components and their interaction
as connections
- relating non-software structures in an environment
Module
Module-based structures include the following:
Decomposition
Purpose :
Elaboration :
- represent a common starting point of design
- Each unit can be expanded for a detailed design and their implementation
- Association with other external products can be determined.
Advantage :
- makes system modification easier
- modification can be restricted to a few individual modules if there is a planned decomposition
Uses.
Purpose :
- the decomposed modules as units need to be connected so as to get a clear
picture of the overall system
- units related by ‘uses’ relationship
Elaboration:
- the correctness of one unit depends on the correctness of the units it uses
- the system can be connected to other systems for further extensions
Advantage :
- Functionality can be either easily extended or extracted
- leads to incremental development
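As a minimal sketch of the 'uses' relation (function names are hypothetical), the correctness of parse() below depends on a correct validate(), and further units can be added incrementally without touching the existing ones:

```python
# Hypothetical sketch of the 'uses' relation between units:
# parse() uses validate(), so a correct parse() requires a correct validate().

def validate(text):
    # unit B: a small, independently testable check
    return text.strip() != ""

def parse(text):
    # unit A: 'uses' validate(), so its correctness depends on validate()
    if not validate(text):
        raise ValueError("empty input")
    return text.split(",")

print(parse("a,b,c"))   # -> ['a', 'b', 'c']
```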
Layered :
Purpose :
- when the relations among units are structured in a standard manner, a system of layers emerges
- layers are abstractions
- each layer is a coherent set of related functionality, which is of great use when the system is extended
Elaboration
- In a strictly layered architecture, for a given n layers, layer n may only use the service of
layer n-1.
Advantage :
- improves portability
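A minimal sketch in Python (layer names and data are hypothetical) of a strictly layered structure, where each layer calls only the layer directly beneath it:

```python
# A minimal sketch of a strictly layered structure: each layer may
# only use the services of the layer below it (layer n uses layer n-1).
# Layer names and data are hypothetical.

class DataLayer:                      # layer 1: persistence
    def fetch(self, key):
        return {"user:1": "Alice"}.get(key)

class ServiceLayer:                   # layer 2: business logic, uses layer 1 only
    def __init__(self):
        self._data = DataLayer()
    def greeting(self, user_id):
        name = self._data.fetch(f"user:{user_id}")
        return f"Hello, {name}!" if name else "Hello, guest!"

class PresentationLayer:              # layer 3: UI, uses layer 2 only
    def __init__(self):
        self._service = ServiceLayer()
    def render(self, user_id):
        return self._service.greeting(user_id)

print(PresentationLayer().render(1))  # -> Hello, Alice!
```

Because the presentation layer never touches the data layer directly, the persistence mechanism can be swapped out without changing the upper layers, which is the portability benefit noted above.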
Class or Generalization
Purpose :
- units are called classes
- units with similar behaviour are collected into classes
- the relation is 'inherits-from' or 'is-an-instance-of'
Elaboration
- classes can be inherited from other classes
- involves all the object-oriented features
Advantage.
- Re-usability
- incremental addition of functionality
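A small, hypothetical Python illustration of the 'inherits-from' relation: shared behaviour sits in a base class and each subclass adds functionality incrementally:

```python
# Hypothetical example of the 'inherits-from' relation: shared
# behaviour lives in a base class and is re-used by every subclass.

class Shape:                       # generalization
    def describe(self):
        return f"{type(self).__name__} with area {self.area():.1f}"

class Square(Shape):               # Square inherits-from Shape
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

class Circle(Shape):               # incremental addition of functionality
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

print(Square(3).describe())   # -> Square with area 9.0
```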
Component-and-Connector
These structures include the following:
Process :
Purpose :
- the units are processes or threads
- these units are connected by communication, synchronization, and/or exclusion operations
Elaboration:
- deals with dynamic aspects of a running system
- orthogonal to the module-based structure
Advantage :
- helps to engineer the system's execution performance and availability
Concurrency
Purpose :
- units are 'logical threads'
- a logical thread is a sequence of computation that can be allocated to a separate 'physical' thread during the design process
Elaboration
Advantage :
- used in the early design process to understand the concurrency structure and to identify requirements for managing issues associated with concurrent execution
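As an illustrative sketch (names are hypothetical), two logical threads, each an independent sequence of computation, can be allocated to separate physical threads with Python's threading module:

```python
import threading
import queue

# Sketch: two 'logical threads' (independent sequences of computation)
# allocated to two separate physical threads. Names are illustrative.

results = queue.Queue()               # thread-safe channel for results

def logical_thread(name, items):
    total = sum(items)                # independent computation
    results.put((name, total))

t1 = threading.Thread(target=logical_thread, args=("sensor", [1, 2, 3]))
t2 = threading.Thread(target=logical_thread, args=("logger", [10, 20]))
t1.start(); t2.start()
t1.join(); t2.join()                  # synchronization operation

print(dict(results.queue))
```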
Shared Data :
Purpose :
- the components and connectors create, store, and access persistent data
- used when the system is structured around one or more repositories
Elaboration
- This structure helps to understand how run time software elements produce and
consume data from the shared repositories
Advantage
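A minimal sketch of the shared-data structure, with the repository reduced to a plain dict and hypothetical component names:

```python
# Sketch of the shared-data structure: components communicate only
# through a shared repository (here a plain dict). Names are hypothetical.

repository = {}

def producer(key, value):            # component that creates/stores data
    repository[key] = value

def consumer(key):                   # component that accesses data
    return repository.get(key)

producer("reading", 42)
print(consumer("reading"))   # -> 42
```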
Client-server
Purpose :
- for systems built as a group of cooperating clients and servers
- the components are clients and servers
- the connectors are protocols and the messages shared among the components
Elaboration :
- the client-server structure scales as cooperating elements are added
Advantage
- Separation of Concern
- Physical Distribution
- Load Balancing (supporting run time performance)
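A deliberately simplified sketch of the client-server structure, with the connector reduced to an in-process request/response protocol (message fields are hypothetical):

```python
# Minimal sketch of the client-server structure: the 'connector' is a
# request/response protocol, expressed here as plain dict messages.
# Message fields are hypothetical.

def server(request):                          # server component
    if request.get("op") == "time":
        return {"status": "ok", "body": "12:00"}
    return {"status": "error", "body": "unknown op"}

def client(op):                               # client component
    response = server({"op": op})             # connector: protocol message
    return response["body"]

print(client("time"))     # -> 12:00
```

In a real deployment the two components would sit on different machines, which is exactly what enables the physical distribution and load balancing listed above.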
Allocation
Deployment :
These relations become complex when the allocation is dynamic, for example in a distributed system.
The migration of a software element from one physical unit to another may have an impact on
- performance
- data integrity
- availability of other resources
- maintenance
- security
Implementation
Work assignment
Advantage :
- the work assignment structure is important for assigning responsibilities
- common functions can be assigned to a single team
- provides clear-cut architectural and management decisions
- allocation of expertise by the architect will ensure re-usability and improve productivity
The following table provides the overall view of the elements and their relations, with reference to how they can be used for the system you are developing.
The system's structure may be organized with respect to its functionalities, but there are other views that need to be represented by the different architectural structures discussed above. The different views of the system provided by the different structures give the architect a clear understanding of the system.
Again, the influence of architecture takes on different perspectives, as the system under construction involves many factors. Remember that the system under construction shapes into the final product.
1.6.1 Influence of Stakeholders
Any system under development undergoes many changes depending upon the expectations of the stakeholders. A stakeholder is any individual or team who is directly or indirectly involved in the system under construction.
Example : University Management System
Stakeholders : Customer ( University Administrators )
End user ( Management staff, Teaching staff, students, ... )
Developers
Project Manager
Maintenance Team
Finance Team (Budget)
Marketing Team
Concerns :
Goal :
To guarantee the performance of the system and optimize it
To easily customize the system
To achieve short time marketing
To employ programmers with particular functionality
To provide a broad range of functions
To provide maintenance with low cost
These are the major milestones for addressing the concern of the stakeholders.
The architecture is influenced by the nature of the organization. This depends upon the resources
available and the limitations within which these resources can be deployed. For example the skill set of the
programmers ( human resources) decides what architecture will be supported by the management.
Immediate business
Long-term business
Organizational structures
Immediate Business
An immediate business indicates that the time period is short, so reusability is a major factor. Existing architectures, and the products on which the current system can be built, play a crucial role. The proposed system may be in the pipeline to be released, and therefore re-using existing structures will speed up the process with minimum cost.
Long-term Business
A long-term business indicates that there will be a major development in the infrastructure which
involves major financing requirements and an extended time period. The organization may wish to follow strategic
goals for a complex system development.
Organizational Structures
The software architecture can be shaped by the organizational structures. The strength of the organization will have an influence on the way the architecture can be defined. Other subsystems may be subcontracted as per the expertise, and a proper work assignment flow needs to be followed.
The current or emerging environment will influence the architecture. This is a special case of influence with reference to the architect and their experience in multiple system designs. If a previous architecture was successful in terms of the end product, then it will be a possible choice of influence. The flexibility of the environment will also influence the architectural decision.
The architect needs to understand the nature, source, and priority of constraints on the project. Therefore, active engagement of the stakeholders is very important for a fruitful design of the system. The influence of the architect and the architecture is shown in Fig. 1.1.
[Fig 1.1: The architect's influence — stakeholders and the developing organization]
Architects are influenced by the requirements of the stakeholders, the structure , goals, available technical
environment of the organization and their own prior experience.
A business manages the feedback cycle and thereby sustains growth and development. This is
shown in Fig 1.2.
Fig 1.2 Architecture Business Cycle
Architecture is an important part of the design process and includes large complex systems whose structures cannot be designed at one stretch. The ABC is built on the basis of the influences to and from the architecture.
Requirement specifications are very crucial for the successful implementation of the system.
Requirements
Functional Non-Functional
Functional Requirements
Any system is bound to provide some functionalities. How the system should react to inputs, and how it will behave for these inputs, should be specified in the functional requirements.
Requirements depend on
- the type of software (system software, application software, engineering software, embedded software, ...)
- expected users of the software
- general approach of the organization
Functional requirements need to be stated as general requirements on the whole and then decomposed into specific requirements.
1. A user shall be able to search attendance details for a certain time period
2. The system shall generate each day, list of employees who are present for that day
3. Each staff member using the system shall be identified by their employee ID (unique)
4. A cumulative report of the attendance will be generated on a daily basis
-------
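To make requirements 1-3 above concrete, a minimal sketch (data, field names, and function names are hypothetical) might look like:

```python
from datetime import date

# Sketch of the attendance requirements above: each staff member is
# identified by a unique employee ID, and attendance can be searched
# for a time period. Records and field names are hypothetical.

attendance = [
    {"emp_id": "E001", "day": date(2024, 1, 2)},
    {"emp_id": "E002", "day": date(2024, 1, 2)},
    {"emp_id": "E001", "day": date(2024, 1, 9)},
]

def search_attendance(emp_id, start, end):
    """Requirement 1: attendance details for a certain time period."""
    return [r for r in attendance
            if r["emp_id"] == emp_id and start <= r["day"] <= end]

def present_on(day):
    """Requirement 2: employees present on a given day."""
    return sorted({r["emp_id"] for r in attendance if r["day"] == day})

print(present_on(date(2024, 1, 2)))   # -> ['E001', 'E002']
```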
The user requirements in turn shape into the system requirements for further elaboration of the system. If the requirements are not clear, this will lead to an imperfect system development, which will delay the system development and in turn increase the production cost. The requirements should be clear, as ambiguity may lead to misrepresentation of the system.
For example, consider Requirement No. 3: the identification of the user is through their employee ID. The sub-queries that may be raised include:
- Should the employee ID be unique or not?
- How many digits should the ID have?
- Are there different levels of employee?
- What should their privileges be? etc.
The functional requirements of the system should possess the following characteristics
- Complete Specification of the system
All the services (functions) of the system need to be defined.
- Consistency of the requirements of the system
The same flow of the system should be maintained until the concluding phase; otherwise it leads to contradictions.
But this is very difficult to adhere to in practice for a large complex system, as consistency and completeness are complex to achieve; still, the ultimate aim is to attain such systems. Another challenge is that each stakeholder will expect that their requirements are met, and for a large complex system there may be many stakeholders, which makes the process challenging. Frequently stakeholders will express inconsistent requirements. The inconsistency may not be realized in the initial phase but only at a later stage, when the damage has already been done, and recovering from such damage will cost the organization. That is why a precise requirements specification involving all the stakeholders is important for the successful development of the system.
After incorporating the requirements, the next phase is the design phase. The goal of design is to produce a model or representation that exhibits firmness, commodity, and delight (Pressman). To formulate a system or product, the requirements of the stakeholders, business needs, and technical considerations need to be involved.
The requirements phase focuses on
the different perspectives of data,function and behavior.
The design model provides detail about
-Software architecture
-Data structures
-Interfaces
-Components
For making critical design decisions, we rely on constraints. Constraints pose valid restrictions on the design decisions; a constraint can be considered a requirement that cannot be deviated from due to any external factor. The constraints on a software architecture design consist of
- Technical constraints
- Business constraints
Therefore the design constraints may limit the completeness of the system, and hence the design should be developed with utmost care.
For example, you cannot develop a system with a constraint that this application will be executed only in devices
with windows version xxx.
Persistent storage
- database, cloud environment
Integration of all these components may pose some other new challenges.
As the system needs to be executed on various devices manufactured by different vendors at different prices, the UI limitations cannot be predicted. A cumulative decision has to be taken so that the system runs under diverse constraints.
For example, in mobile applications, the interactions may be through
- keys
- touch
- stick
- pen
- virtual keys
- voice
When voice is taken into consideration, the complexity takes different dimensions in terms of the non-standard
speech recognition which may pose a severe constraint on the interactions.
The quality attributes of any system should be considered throughout, from the initial to the final phase of system development. They are often given less consideration, as the initial focus is always on the successful deployment of the system. But in order to maintain the lifetime of the system, and to minimize the technical flaws which arise in the course of system support, the system should be of high quality.
A software system whose quality attributes (non-functional requirements) are ensured is a high quality software.
Maintainability - A changing business environment requires change in the software, and the software should meet the changing needs.
Dependability and Security - Only reliable software can be depended upon, and it will ensure the safety of the system.
Efficiency - This includes responsiveness, processing time, memory utilization, etc.
Acceptability - The system should be understandable, easy to learn, and compatible across many devices. This ensures the usability of the system.
Product characteristics
Usability involves both architectural and non-architectural aspects.
Non-architectural aspects include making the user interface clear and easy to use.
Example : Should you provide a radio button or a checkbox?
Which typeface would be clear?
Are there provisions to undo operations?
These minute details influence usability.
2. Business Qualities
3. Other Qualities
Source of Stimulus : An entity that generates the stimulus (e.g., a human interaction; in a web application, the user triggers a hyperlink and is redirected to another system/server).
Environment : The environment in which the stimulus occurs; for example, when the system is in an idle state, in progress, or in an overloaded state.
Response : The response (output) is the activity undertaken after the stimulus is triggered.
Response Measure : In order to test the requirement, the response should be measurable. For example, how many seconds after clicking the hyperlink does the page pop up?
The quality attribute scenario can be transformed from a general scenario to a specific scenario. The availability-specific scenario is illustrated below:
Scenario 1 : When there is an unprecedented situation (system crash), after how many seconds will the system be available for reuse?
- Source : internal/external
- Stimulus : crash
- Artifact : process
- Environment : normal
- Response : system unavailable
- Response Measure : repair time / availability
Scenario 2 : When there is an unprecedented situation (unstable networking condition), after how many seconds will the system resume?
- Artifact : storage
- Source / Stimulus / Response / Response Measure : ...
Scenario 3 : When there is an unprecedented situation (file missing), what notification is to be made?
- Artifact : process
- Source / Stimulus / Response / Response Measure : ...
The quality attribute requirements should be elicited, developed, recorded, and implemented for the successful deployment of the software; otherwise the consequences may be highly serious. Quality-attribute-specific tables are created, which are used for the general scenarios and in turn for the specific scenarios. With such reference tables, the quality attribute scenarios can be generated.
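One way to record such a scenario (a hypothetical sketch, not a prescribed format) is as a six-part structure, filled in here with the crash example from Scenario 1:

```python
from dataclasses import dataclass

# Sketch: a quality attribute scenario recorded as its six parts,
# filled in with the availability (crash) example. Field values
# paraphrase Scenario 1 above.

@dataclass
class QualityAttributeScenario:
    source: str
    stimulus: str
    artifact: str
    environment: str
    response: str
    response_measure: str

crash = QualityAttributeScenario(
    source="internal/external",
    stimulus="system crash",
    artifact="process",
    environment="normal operation",
    response="system becomes unavailable, then is repaired",
    response_measure="repair time until service resumes",
)
print(crash.stimulus)   # -> system crash
```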
Summary
This chapter defined the fundamentals of software architecture. The need for software architecture and its importance during the software engineering process have been emphasized. The system under consideration should be viewed from both the business and the technical perspective; ignoring either perspective will lead to several other flaws in the system.
Questions
1. What is Architecture of a software based on?
a) Design
b) Requirements
c) All of the mentioned
d) None of the mentioned
5.What are the categories in which quality attributes are divided in?
a) Development Attributes
b) Operational Attributes
c) Functional Attributes
d) Development & Operational Attributes
a) Design
b) Integrability
c) Maintenance
d) None of the mentioned above
Quality Attribute Workshop (QAW) focuses on the core system and its stakeholders. The QAW's intention is to gather the requirements from all the stakeholders involved. The first thought is that this may happen during the requirement analysis phase of the software engineering cycle itself; so why do you need such workshops? The requirements may be split into functional and non-functional requirements. The functional requirements are the core functionalities of the software under development. The non-functional requirements facilitate the quality required of the software. Therefore the aim is to include the quality attributes during the architecture design, rather than taking them into account after the completion of the core functionalities of the system.
Although an architecture may not be able to assure that the final implementation will include the quality attributes, keeping the quality attributes as required constraints may help during the implementation process. For example: security is a quality attribute, but it may not be a visible part of the actual system. The design of the components, their relationships with other components, and the secure integration of these components with other subsystems should be developed early in the life cycle of the system.
QAW is a way to
- discover
- document
- and prioritize
quality attributes in the early life cycle phase. On the whole, the functional and non-functional requirements of the system need to be considered during the initial phase in order to ensure high quality software.
The quality attributes of a system are as important as the functional requirements. If the goal were only to develop the right system that provides the correct outcome for all the functionalities, then we would not have a major challenge in developing such systems. But, for example, consider the following requirements:
- Will the system provide the same feel across different devices manufactured by different vendors?
- Will modifying any part of the system severely affect the other interrelated subsystems?
- How can portability be assured for a smooth transition of the software from one environment to another?
These attributes cannot be taken for granted and cannot be considered only at the terminating phase, as this may impact the overall production cost of the system.
The major issue while defining the quality attributes is that they are not definitive.
Quality attributes range from the abstract to the concrete.
Example
Scenario 1 : A user interface runs on a given OS. If a modification is made to the user interface, is it adaptable? This constraint is abstract: it depends upon the domain, upon the UI elements, and upon how the change will be realized in a different version of the same system.
Scenario 2 : A system that is modifiable as per the NextGen UI elements; is it adaptable to the next OS versions (e.g., from OS v1 to OS v1.1)? This depends upon the system; hence definite constraints are needed, so that the abstract constraints are shaped into concrete definitions.
Decomposing the system into subsystems will also affect how the functionalities are implemented, along with the quality attributes. Achieving one quality attribute will have an impact on other attributes [Boehm 78]; considering one attribute will have an effect on the others.
Example :
As cyber crime rates are increasing exponentially, security is a key quality attribute to be taken into consideration. But additional security mechanisms lead to increased processing time, which may degrade the performance of a low-end device.
This is the trade-off which we need to explore so as to solve it flexibly without compromising the security
features of the system. Thus architectural decisions will be made with reference to the trade-off decisions.
Earlier trade-off decisions ------> Include quality attributes --------> Architectural Decisions
(Identification & Prioritization)
Critical design decisions and that too including quality attributes pose several other challenges which will be
addressed in the QAW:
• How can you discover, characterize, and prioritize the key quality attributes before the
system is built?
- A facilitator to address the overall objective, observe the concerns, and record/document the information dissemination
- the motivation is elaborated by the facilitator
- an introduction to all the methods and the consecutive flow of information is initiated
2. Business/Mission Presentation
5. Scenario Brainstorming
Scenario generation is the key step in QAW. Rigorous brainstorming is required for scenario generation.
Guidelines for Facilitator
- to shape requirements into quality-attribute-based requirements, so as to generate well-formed scenarios
- to define to which property of the system the quality attribute belongs
- to remember the three general types of scenario:
- use case scenario, which you would have studied in the software engineering course
- growth scenario: whenever there is a change, how it affects the properties of the system elements
- exploratory scenario: when there is an unanticipated (exceptional) stress, its impact on the system's elements
6. Scenario Consolidation
- Consolidation of similar scenarios , if any
7. Scenario Prioritization
8. Scenario Refinement
The QAW provides increased stakeholder communication, well-defined architectural decisions, architecture documentation, and proper support for maintenance and testing over the lifetime of the system.
[QAW]
A typical set of Software quality attributes as defined by Boehm et al (1978) are shown in Fig.
The architecture documentation can be used to evaluate the system's quality attributes. Each attribute requires a different set of information to analyze and meet the requirements.
Stakeholder - Use of the architecture documentation
Designers of other systems with which this one must interoperate - the set of operations provided and required; the protocols for their operation
Quality attribute analysts - tracking progress
Quality assurance team - to provide a basis for conformance checking, for assurance that implementations have been faithful to the architectural prescriptions
2.2.2 Architecture Documentation and Quality attributes
The main aim of architecture documentation is to serve as the basis of analysis of each attribute, whereas these attributes do not show up as individual entities. They are part of the system, and can be realized during analysis of the system. There are five major ways through which the quality attributes can be observed and analyzed.
In the documentation, an explanation of the choice of approach should be included, and the purpose of the quality attribute requirement and its tradeoffs should be detailed. This is termed the rationale approach.
Thus, documenting the quality attributes explicitly in the early stage of the development process will provide a basis for a common consensus among the stakeholders. Proper assessment and documentation of these specifications will help until the final development stage, thereby enabling evaluation of the overall system's functional and quality attribute requirements as well.
The scenarios are classified into six parts, which have been listed with examples in the previous sections. We have also discussed that there are two types of scenarios: a general scenario paves the way for the specific scenario. In the previous section the scenarios were generated, prioritised, and finalized. The general scenarios provide a framework for scenario generation. The general scenarios thus generated may not be specific, but should be made specific to the system, and in turn shaped into quality-attribute-specific scenarios. Making a general scenario specific means translating it from abstract to concrete terms.
General Scenario :
“A request arrives for a change in functionality, and the change must be made at a particular time within the
development process within a specified period”
There can be multiple system specific versions
Specific Scenario :
Version 1 : ‘A request arrives to add support for a new browser to a Web-based system and the change must be
made within two weeks’
An extension of this can be:
- a new browser may require a different media type
- the web-based system should be adaptive, so as to integrate with the new browser
We will discuss each of the six common quality attributes in the following sections. The goal is to identify the attribute and generate general scenarios for it.
AVAILABILITY
The term 'availability' implies that the system is available, at your service. But when is this a cause of concern? When there is a system failure, will the system still be available? A system failure means the system is no longer at your service. The following questions will help you understand the importance of this quality.
- Is the system consistently unavailable for service?
- Is system failure occurring frequently?
- How much time does it take to detect a failure?
- Is the failure easily observed by users/other systems?
- After a failure, when will the system be available for service, or resume?
- How can failure be prevented?
- Will the failure affect the core components, or will the system switch to a safe mode?
- What kind of notifications should/will be given when there is a system failure?
- Will there be any warnings prior to the system failure?
- Will these warnings help the user to prevent the failure?
- If so, up to what extent?
A system fault, when not corrected, becomes a failure. A fault may be a miscalculation in a specific computation, which leads to a system fault.
Example : A temperature sensor in a furnace is programmed to check a threshold which should not be exceeded; if there is a miscalculation, this may lead to a disaster.
The next issue, when there is a failure, is the time for the system to resume normalcy: the response time. A system failure is human-observable; the repair time is the time until the failure is no longer observable by the user. The response measures for availability therefore include:
- the time interval (a quantification) during which the system must be available
- the availability time
- the time interval in which the system can be in degraded mode
- the repair time
Thus the availability of the system is the probability that the system will be in operational mode whenever it is required.
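This probability is commonly quantified from the mean time between failures (MTBF) and the mean time to repair (MTTR), a standard steady-state formula rather than one given in the text itself; a small sketch with hypothetical figures:

```python
# Steady-state availability as commonly computed from the mean time
# between failures (MTBF) and the mean time to repair (MTTR).
# The numeric figures below are hypothetical.

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g. a failure every 1000 hours, repaired in 1 hour on average:
print(f"{availability(1000, 1):.4f}")   # -> 0.9990
```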
MODIFIABILITY
Change is inevitable for any system, as there may be emerging requirements to be fulfilled at periodic time intervals. This may be due to several factors which influence the operational definitions of the system. Any business organization will be expecting these changes, and such changes should be foreseen during the QA requirement phase itself. The modifiability scenarios again start with a general scenario and move towards specific scenario generation.
Modifiability General Scenarios
Let us consider a part of the modifications and how the scenario creation will be understood and debated by the
stakeholders as described in the previous section.
Scenario : "A developer wishes to change the user interface. This change will be made to the code at
design time, it will take less than three hours to make and test the change, and no side-effect changes will occur in
the behavior."
The source of the stimulus may be the end user, the developer, or a system administrator.
Clearly, there should be provision made for changes made by the above source of stimulus but without affecting
the core operations of the system.
A change can be
- the addition, deletion, or modification of an existing function,
- a change in terms of the qualities of the system.
Variation is a concept associated with software product lines. The level at which the variation should occur will have an impact on the response measure; this is termed the variation factor.
Artifact. This part specifies what is to be changed: the functionality of a system, its platform, its user interface, or its environment. In our example: the user interface.
Environment. This part specifies when (at what time) the change can be made: design time, compile time, build time, initiation time, or runtime. In our example, the modification is to occur at design time.
Response. Whenever the change is made, the people/system responsible for the change should carry out the change and verify it. In our example: the modification is made with no side effects.
Response Measure. The effort or time for making the change. In our example: an approximate number of days.
Ideal measures can be unpredictable, and so we move towards less ideal measures such as the number of modules affected, the number of developers required, etc.
Table 4.2 presents the possible values for each portion of a modifiability scenario.

Sl.No  Six-Part Scenario  Possible Values
1      Source             End user, developer, system administrator
3      Artifact           System user interface, platform, environment; system that interoperates with target system
PERFORMANCE
Performance is about timing: a request is triggered, and in how many seconds will the system respond? The request is any event that is initiated.
Events (interrupts, messages, requests from users, or the passage of time) occur, and the system must respond to
them. There are a variety of characterizations of event arrival and the response but basically performance is
concerned with
‘how long it takes the system to respond when an event occurs’
The performance of a system becomes complicated when there are multiple events and the arrival patterns differ.
For example, consider a railway reservation system with seasonal booking by multiple users from different sources.
Events can be
- user requests from other systems,
- user requests from within the system.
A Web-based financial services system gets events from its users (possibly
numbering in the tens or hundreds of thousands). An engine control system gets its requests from the passage of
time and must control both the firing of the ignition when a cylinder is in the correct position and the mixture of
the fuel to maximize power and minimize pollution.
For the Web-based financial system, the response might be the number of transactions that can be processed in a
minute. For the engine control system, the response might be the variation in the firing time. In each case, the
pattern of events arriving and the pattern of responses can be characterized, and this characterization forms the
language with which to construct general performance scenarios.
Events can be characterized into
- Periodic (time interval)
- Stochastic (probability distribution)
- Sporadic (neither periodic nor stochastic)
When you speak about multiple events, the loading factor of the system needs to be considered.
The response of the system to a stimulus can be characterized by
- latency (the time between the arrival of the stimulus and the system's response to it)
- deadlines in processing (in the engine controller, the fuel should ignite when the cylinder is in a
particular position)
- throughput of the system (e.g., the number of transactions the system can process in a second)
- jitter of the response (the variation in latency)
- the number of events not processed because the system was too busy to respond
- data that was lost because the system was too busy
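These response measures can be computed directly from timestamps. The sketch below (plain Java; the helper names are our own) samples per-event latency and derives jitter as the spread between the slowest and fastest responses; other definitions of jitter, such as the deviation between successive latencies, are equally valid.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: measuring latency and jitter for a batch of simulated requests.
// The "handler" is a stand-in for any event-processing code.
class LatencyProbe {
    // Records one latency sample (in nanoseconds) per invocation of the handler.
    static List<Long> sample(Runnable handler, int events) {
        List<Long> latencies = new ArrayList<>();
        for (int i = 0; i < events; i++) {
            long start = System.nanoTime();
            handler.run();
            latencies.add(System.nanoTime() - start);
        }
        return latencies;
    }

    // Jitter taken here as max latency minus min latency (variation in latency).
    static long jitter(List<Long> latencies) {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (long l : latencies) {
            min = Math.min(min, l);
            max = Math.max(max, l);
        }
        return max - min;
    }
}
```

Throughput follows the same idea: the number of samples divided by the total elapsed time.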
Performance general scenario generation
Part of Scenario   Possible Values
Source             One of a number of independent sources, possibly from within the system
Stimulus           Arrival of events (periodic, stochastic, or sporadic)
Artifact           System
Environment        Normal or overload mode
Response           Processes stimuli; changes level of service as per mode
Response Measure   Latency, deadline, throughput, jitter, miss rate, data loss
The performance factor is also a major indicator of the lifetime of the software.
SECURITY
Security is a measure of the system's ability to resist unauthorized usage while still providing its services to
legitimate users. An attempt to breach security is called an attack [1]. This is not the only form, as security can
take a number of forms in the current digital era; further reading may be required to gain a deep understanding of
these forms.
It may be an unauthorized attempt
to access data or services
to modify data
to deny services to legitimate users.
Nonrepudiation
Definition : The property that a transaction (access to or modification of data or services) cannot be denied by
any of the parties involved in it.
Example : You cannot deny that you ordered that item over the Internet if, in fact, you did.
Confidentiality
Definition : The property that data or services are protected from unauthorized access.
Example : A hacker cannot (and should not) be able to access your personal details from a passport office.
Integrity
Definition : The property that data or services are being delivered as intended.
Example : Your semester result has not been changed after it was posted online.
Assurance
Definition : The property that the parties to a transaction are authorized.
Example : when a customer sends a credit card number to an Internet merchant, the merchant
is who the customer thinks they are.
Availability
Definition : The property that the system will be available for legitimate use.
Example : A denial-of-service attack won't prevent your ordering this book.
Auditing
Definition : The property that the system tracks activities within it at levels sufficient to
reconstruct them.
Example : If you transfer money from one account to another account, in Switzerland, the system
will maintain a record of that transfer.
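Integrity is commonly checked by comparing cryptographic digests: if a freshly computed digest of the data matches the stored one, the data was delivered as intended. A minimal sketch using the JDK's MessageDigest (the class name here is our own):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Sketch: verifying data integrity with a SHA-256 digest.
class IntegrityCheck {
    static byte[] digest(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA-256 is always available in the JDK", e);
        }
    }

    // True when the recomputed digest matches the one stored earlier.
    static boolean unmodified(byte[] data, byte[] storedDigest) {
        return Arrays.equals(digest(data), storedDigest);
    }
}
```

A tampered semester result, for instance, would fail the comparison against the digest recorded when the result was posted.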
The architecture specification of Java 2 Enterprise Edition (J2EE) and its quality attributes are described in this
section. J2EE (originally from Sun Microsystems, now owned by Oracle Corporation) is an enterprise-oriented
distributed object-oriented model. It is designed and developed to integrate various Java components, since no
business model is confined to a single Java component.
Enterprise JavaBeans (EJB) is an important part of J2EE. EJB is a server-side component-based programming
model. When you speak about enterprise services, you have a wide collection of components to be integrated. The
services may include
Naming services
Life cycle management of components to maintain state
Persistence
Finally, the support provided by vendors for the various application models conforming to the standard is
required.
Any distributed system requires the support of a standard infrastructure. The .NET (Microsoft) architecture
specification provides similar services for building distributed systems, exclusively for Windows-based platforms.
Stakeholders : software community, developing organization, software vendors
Requirements (qualities) : portability, transparency, evolvability, interoperability, extensibility
Technical environment : OO paradigm, distributed computing, Java programming language
Architect(s) : Sun Microsystems / Oracle
Architects' experience : varied
Architecture : J2EE/EJB
The goals of the J2EE/EJB architecture should be reflected in the qualities of the architecture.
For example, availability/reliability: the system should provide 24/7 availability with very small downtime
periods.
EJB architecture
Addresses the development, deployment, and runtime aspects of an enterprise application's life cycle
Defines the contracts that enable tools from multiple vendors to develop and deploy components that
can interoperate at runtime
Interoperates with other Java APIs
Provides interoperability between enterprise beans and non-Java applications
Interoperates with CORBA (Common Object Request Broker Architecture)
Architectural Solutions
Client tier.
In a Web application, the client tier comprises client-side components invoked by the
web browser, or standalone Java clients.
Web tier.
The Web tier runs a Web server to handle client requests and responds to these
requests by invoking J2EE servlets or JavaServer Pages (JSPs).
EJB tier.
This tier separates out the business logic. The EJBs specify the behavior they require from the
container at runtime and rely on the container to provide those services.
Enterprise information system (EIS) tier.
The information that has to be fetched and processed is available in this tier. It typically
consists of
one or more databases
back-end applications such as mainframes
legacy systems, which EJBs must query to process requests.
JDBC drivers are typically used for databases, i.e., relational database management systems (RDBMSs).
This section focuses on how the EJB architecture provides a standard programming model for
constructing distributed object-oriented server-side Java applications. A bean is a reusable
software component which includes properties, methods, and definitions. A bean also provides a
standard definition, which reduces clutter in the application programmer's work.
The EJB programmer's job is to bundle these packages with any application-specific
functionality to create a complete application. These beans are also defined based on
standard design patterns. The aim of Java is platform independence: the
Java Virtual Machine (JVM) allows a Java application to run on any operating system. The server
components, however, require additional services that are not part of the JVM. These services should also be
extensible and adaptable. Services such as security, storage, and transactions can also be
provided by other vendors as independent services. For these services you need
additional infrastructure, which is also made available to the application.
[Figure: Step 1, the client application directly invokes a server component. Step 2, the client
application invokes an instance of a component through a container that manages several components
(Component1, Component2, Component3).]
The EJB component model defines the basic architecture of an EJB component, specifying the
structure of its interfaces and the mechanisms by which it interacts with its container and other
components. The model also provides guidelines for developing components that can work
together to form a larger application.
The two main types of components are session beans and entity beans.
Session beans :
contain business logic and provide services for clients. The two types of session bean
are known as stateless and stateful.
- A stateless session bean is defined as not being conversational with respect to its calling
process. It does not maintain any state information about the client.
- A stateful session bean is said to be conversational with respect to its calling process and
therefore can maintain state information about the conversation. Once a client gets a reference
to a stateful session bean, all subsequent calls to the bean using this reference are guaranteed
to go to the same bean instance.
EJB containers assume responsibility for managing the life cycle of stateful session beans.
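The stateless/stateful distinction can be illustrated in plain Java. The sketch below is not real EJB code (no javax.ejb API is used, and the class names are our own); it only models the behavioral difference described above.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java illustration of the stateless/stateful distinction.
class StatelessGreeter {
    // No fields: every call is independent of previous calls.
    String greet(String name) {
        return "Hello, " + name;
    }
}

class StatefulCart {
    // Conversational state: survives across calls made through the same reference.
    private final List<String> items = new ArrayList<>();

    void add(String item) {
        items.add(item);
    }

    int size() {
        return items.size();
    }
}
```

A container can pool and reuse StatelessGreeter instances freely, but each StatefulCart must stay bound to one client's conversation, which is why stateful beans need life-cycle management.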
Entity beans
They are typically used for representing business data objects. The data
members in an entity bean map directly to some data items stored in an associated database.
Entity beans are usually accessed by a session bean that provides business-level client services.
As the entity bean is involved in managing business data objects, persistent data management
is crucial. Persistence refers to the way in which a bean's data (usually a row in a relational
database table) is read and written.
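Persistence can be sketched without a real database: the map below stands in for a relational table, and the class names are hypothetical. In a real container the same read/write operations would go through a JDBC driver to an RDBMS.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of entity-bean-style persistence: each entity instance maps to one
// "row", here simulated with an in-memory map instead of a real database.
class CustomerEntity {
    final int id;    // primary key column
    String name;     // data column

    CustomerEntity(int id, String name) {
        this.id = id;
        this.name = name;
    }
}

class CustomerStore {
    private final Map<Integer, String> table = new HashMap<>(); // the "table"

    // Write the entity's fields out as a row.
    void store(CustomerEntity e) {
        table.put(e.id, e.name);
    }

    // Read a row back into an entity, or null if the key is absent.
    CustomerEntity load(int id) {
        String name = table.get(id);
        return name == null ? null : new CustomerEntity(id, name);
    }
}
```

The point of the sketch is the mapping: each data member of the entity corresponds to a stored column, and persistence is the discipline of keeping the two in sync.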
Table 16.4 summarizes how the EJB architecture supports Sun's key quality attribute
requirements for the overall J2EE architecture. An example deployment view of the J2EE/EJB
architecture is illustrated in Figure 16.4.
The Java client invokes the appropriate Remote Method Invocation (RMI) classes through the Java
Naming and Directory Interface (JNDI). The components are identified through JNDI, and an
instance of the component is created from the EJB container. Data storage and retrieval are
performed through the connection drivers.
EJB PROGRAMMING
An EJB depends on its container for all external information. If an EJB needs to access a JDBC
connection or another bean, it uses container services. Accessing the identity of its caller,
obtaining a reference to itself, and accessing properties are all accomplished through container
services. This is an example of an "intermediary" tactic.
The Bean interacts with its container through one of three mechanisms:
callback methods
EJBContext interface
Java Naming and Directory Interface (JNDI).
To create an EJB server-side component, the developer must provide two interfaces that define
a bean's business methods, together with the actual bean implementation class. Clients use the
interfaces to access a bean inside an EJB container. They expose the capabilities of the bean and
provide all the methods needed to create the bean and to update, interact with, or delete it.
EJB server
Provides the services required by the EJB component
EJB client
Provides the user-interface logic on a client machine
Component’s remote interface defines the business methods that can be called by
the client
The client calls the home interface methods to create and destroy proxies for the
remote interface.
EJB container
The environment in which one or more EJB components execute.
EJB support for J2EE quality attribute requirements (table fragment):
Usability : graphical components for rendering the user interface, compliant with and interfacing
to vendor-specific components
Summary
Quality attributes are crucial for the successful lifetime of the software. As quality attributes
tend to be given the lowest priority in system construction, it is necessary to hold a quality
attribute workshop to understand the importance of the qualities of any system under
development. A quality attribute scenario is a requirement definition specific to a quality
attribute requirement. Scenarios consist of six parts: the source of stimulus, the stimulus, the
environment, the artifact, the response, and the response measure. Based upon these, scenarios
are generated and analyzed. Further, a case study relevant to the creation and generation of
quality attribute scenarios is elaborated in detail.
Questions
9. The quality attributes can be calculated under which of the following measures?
a) Observable
b) Non observable
c) All of the mentioned
d) None of the mentioned
APPENDIX
QAW ROLE and TEMPLATE
#1
#2
#3
#4
#5
......
6. Scenario Consolidation:
<Cut and paste the Raw Scenario Table above. Merge similar and duplicate scenarios
using stakeholders’ input. Cut and paste merged scenarios. Also merge cells in
the Scenario# column as necessary.>
7. Scenario Prioritization:
<Prioritize scenarios. Each stakeholder gets votes equal to 30% of the total number
of scenarios generated. Add a column titled “Votes.”>
8. Scenario Refinement:
<Fully develop the scenario to include details such as how long, how much, how often,
when, environment, who, and so forth.>
Scenario(s):
Business Goals:
Relevant Quality
Attributes:
Stimulus:
Stimulus Source:
Environment:
Response:
Response Measure:
Questions:
Issues:
Table 3: Example Scenario Refinement Table
Scenario(s): When a garage door opener senses an object in the door’s path,
Architectural views help us understand the various perspectives of the system. Views
are necessary because they give an initial understanding of the overall system, which is crucial for the
success of the product (the system under development). How can an architectural view be
represented? You already know how to define the overall system architecture; each chapter helps you gain a
better understanding of the system beyond the core components that have to be designed for any
system.
A system is not a single component which can be designed through one specific view. Each component is
highly different in nature, and hence their functionalities also differ. The way in which stakeholders
see the system will also vary, and the stakeholders' requirements perspective will have an impact
on the architectural view. As the system becomes increasingly complex, it is impossible to provide an
architectural description from different perspectives in a single view. Therefore, two major issues to be
considered for an architectural view are
- the views or perspectives required for designing and documenting the system architecture
- the notations for describing architectural models
“ For design and documentation , you need to present multiple views of the software architecture “
Kruchten (1995), in his well-known 4+1 view model of software architecture, suggests that there
should be four fundamental architectural views, which are related using use cases or scenarios
[Sommerville].
1. A logical view,
shows the key abstractions in the system as objects or object classes.
2. A process view,
shows how, at run time, the system is composed of interacting processes.
Usefulness : for making judgments about non-functional characteristics such as performance and availability.
3. A development view,
shows the split of the software into components that are implemented by a single developer or
development team.
Usefulness : for software managers and programmers.
4. A physical view,
shows the system hardware and how software components are distributed across the processors
in the system.
Usefulness : for systems engineers planning a system deployment.
In addition to these, there are several other views which may be used in due course as per the
requirements of the system. The views which we are discussing take a broader perspective, considering
various aspects of the system. However, you need to apply your own judgment to assess
and analyse what sort of views should be provided for the system you are developing. In the
forthcoming sections we will analyse the standard definition of a view, the other available views, and their
usability as described in the literature.
A viewpoint defines the perspective from which a view is taken. Therefore a view of a system is the
representation of the system from the direction of that viewpoint: you observe the system from a
particular viewpoint, that is, from the perspective of a stakeholder.
Logical view
- functions
- their organization
Process view
- runtime processes and their interactions
- non-functional requirements
Development view
- front end
- back end
- connectivity
Implementation (physical) view
- hardware/software components
- technology
etc.
The following figure presents a clear picture of a system architecture comprising
multiple views, which in turn involve multiple viewpoints.
[Figure: the system architecture is described through several views (logical, process, development,
implementation, ...), each taken from its own viewpoint.]
The architectural structures were discussed in Chapter 2. Now we will explore the coherence
of structure and view in the next section.
We will be using the related terms structure and view when discussing architecture representation.
■ A view is a representation of a coherent set of architectural elements, as written by and read by
system stakeholders. It consists of a representation of a set of elements and the relations among them.
■ A structure is the set of elements itself, as they exist in software or hardware. In short, a view is a
representation of a structure.
For example,
A module structure is the set of the system’s modules and their organization.
A module view is the representation of that structure, and a standard template exists based
upon which the document is created. This provides convenience to the stakeholders.
Architectural structures can by and large be divided into three groups, depending on the broad nature
of the elements they show.
Module structures.
Each unit of implementation is a module. Modules lead to code-based representations. Each module
is assigned a functional responsibility. Modules provide the functionality of each part of the
system, but place less emphasis on how the resulting software manifests itself at runtime.
Before developing module structures , you should be able to answer the following questions.
What is the primary functional responsibility assigned to each module?
What other software elements is a module allowed to use?
What other software does it actually use?
What modules are related to other modules by generalization or specialization (i.e.,
inheritance) relationships?
If you raise these questions prior to the creation of module structures, you will also observe
that quality factors are built into the system by default.
Component-and-connector structures.
The unit of elements are runtime components (which are the principal units of computation) and
connectors (which are the communication vehicles among components).
When you develop a component-and-connector structure, you should be able to answer questions
such as
What are the major executing components and how do they interact?
What are the major shared data stores?
Which parts of the system are replicated?
How does data progress through the system?
What parts of the system can run in parallel?
How can the system's structure change as it executes?
Note that the units here are components at runtime. Therefore you should think from the
perspective of component execution: the transitions of these components, the data they change or share,
the impact on other components due to such changes, changes in the flow of execution, and so on, all
play a major role.
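As an illustration of runtime components and connectors, the sketch below (plain Java; our own construction, not from the text) wires two executing components, a producer and a consumer thread, through a bounded queue acting as the connector. The queue is also where the questions about shared data and parallelism above become concrete.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: two runtime components joined by a connector (a bounded queue).
// Real connectors may instead be sockets, pipes, or middleware.
class PipelineSketch {
    static int runOnce() {
        BlockingQueue<Integer> connector = new ArrayBlockingQueue<>(4);
        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 5; i++) {
                try {
                    connector.put(i);        // data progresses through the connector
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        final int[] sum = {0};
        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    sum[0] += connector.take();
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum[0]; // 1 + 2 + 3 + 4 + 5
    }
}
```

The two threads are the executing components, the queue is the connector, and the bounded capacity models the back-pressure a real connector imposes.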
Allocation structures.
Allocation structures, as the name suggests, show the relationship between the software elements and
the elements of one or more external environments in which the software is created and executed.
These structures raise questions such as
What processor does each software element execute on?
In what files is each element stored during development, testing, and system building?
What is the assignment of software elements to development teams?
The allocation of the other external units required for the smooth execution of the software is the main aim
of these structures.
Therefore you should keep in mind that mixing up these structures will lead to poor
architectural decisions.
These three structures correspond to the three broad types of decision that architectural design
involves:
How is the system to be structured as a set of code units (modules)?
How is the system to be structured as a set of elements that have runtime behavior
(components) and interactions (connectors)?
How is the system to relate to non-software structures in its environment (i.e., CPUs, file
systems, networks, development teams, etc.)?
Module
Decomposition:
Decomposition is the process of breaking down modules into submodules. This happens
only when a module is complex in nature and many submodules are likely to be involved.
The units are modules related to each other by the "is a submodule of " relation, showing how
larger modules are decomposed into smaller ones recursively until they are small enough to be easily
understood.
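The "is a submodule of" relation forms a tree, which can be sketched directly; the class and module names below are our own illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: recursive module decomposition as a tree. Decomposition proceeds
// until the leaves are small enough to be understood and implemented.
class ModuleNode {
    final String name;
    final List<ModuleNode> submodules = new ArrayList<>();

    ModuleNode(String name) {
        this.name = name;
    }

    // Adds and returns a submodule, so decomposition can continue recursively.
    ModuleNode decomposeInto(String child) {
        ModuleNode m = new ModuleNode(child);
        submodules.add(m);
        return m;
    }

    // The leaf modules are the units that get designed and implemented.
    int leafCount() {
        if (submodules.isEmpty()) return 1;
        int n = 0;
        for (ModuleNode m : submodules) n += m.leafCount();
        return n;
    }
}
```

Counting leaves gives the number of units the development organization must actually assign and build.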
Modules in this structure represent a common starting point for design, as the architect
enumerates what the units of software will have to do and assigns each item to a module for
subsequent (more detailed) design and eventual implementation. Modules often have associated
products (i.e., interface specifications, code, test plans, etc.).
The decomposition structure provides a large part of the system's modifiability, by ensuring that
likely changes fall within the purview of at most a few small modules. It is often used as the basis for the
development project's organization, including the structure of the documentation, and its integration
and test plans.
The units in this structure often have organization-specific names. Certain U.S. Department of
Defense standards, for instance, define Computer Software Configuration Items (CSCIs) and Computer
Software Components (CSCs), which are units of modular decomposition.
Uses:
The units of this important but overlooked structure are also modules, or (in circumstances
where a finer grain is warranted) procedures or resources on the interfaces of modules. The units are
related by the uses relation.
One unit uses another if the correctness of the first requires the presence of a correct version
of the second. The uses structure is used to engineer systems that can be easily extended to add
functionality or from which useful functional subsets can be easily extracted.
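The uses relation can be modelled as a directed graph, and the subset needed to extract a piece of functionality is then just a reachability computation. The sketch below (module names in the test are hypothetical) follows that idea.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: the "uses" relation as a directed graph. The transitive closure of
// "uses" from a module is the smallest subset that must be extracted for that
// module to function correctly.
class UsesStructure {
    private final Map<String, List<String>> uses = new HashMap<>();

    void addUse(String from, String to) {
        uses.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // All modules required (directly or transitively) by 'root', including root.
    Set<String> requiredSubset(String root) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> work = new ArrayDeque<>(List.of(root));
        while (!work.isEmpty()) {
            String m = work.pop();
            if (seen.add(m)) {
                work.addAll(uses.getOrDefault(m, List.of()));
            }
        }
        return seen;
    }
}
```

This is exactly how a uses structure supports extracting a useful functional subset: everything reachable from the chosen module goes with it, and nothing else has to.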
Layered:
You have layers designed around abstractions so that they can be coherently connected, and this
helps ensure portability. The abstractions within a layer should be at the same level. For example, observe the
following figure: each layer provides an abstraction, and whenever required the abstraction can be
expanded. Hence, from the networking perspective, this helps confirm portability.
When the uses relations in this structure are carefully controlled in a particular way, a system of
layers emerges, in which a layer is a coherent set of related functionality.
Data hiding
Class, or Generalization:
The module units in this structure are called classes. The relation is "inherits-from" or
"is-an-instance-of." This view supports reasoning about collections of similar behavior or capability (i.e.,
the classes that other classes inherit from) and about parameterized differences, which are captured by
subclassing. The class structure allows us to reason about reuse and the incremental addition of
functionality.
View - reusability
Incremental development
Component-and-Connector
[Figure: a DHCP client component sends a DHCP Discover message to the server component; the
message exchange is the connector, the client and server are the runtime components.]
Concurrency:
Concurrency, as the name suggests, is the concurrent execution of the runtime components. This
paves the way for parallelism and multithreading, which in turn requires proper allocation of resources and
an ordering of their allocation.
A logical thread is a sequence of computation that can be allocated to a separate physical thread
later in the design process. The concurrency structure is used early in design to identify the
requirements for managing the issues associated with concurrent execution.
For example, the Simple Network Management Protocol (SNMP) reads and writes various
pieces of state information on different network nodes. The Management Information Base (MIB)
maintains this information, which is fetched and shared among the manageable network nodes.
View - performance,
data integrity
Client-server:
If the system is built as a group of cooperating clients and servers, this is a good
component-and-connector structure to illuminate. The components are the clients and servers, and the
connectors are protocols and messages they share to carry out the system's work. This is useful for
separation of concerns (supporting modifiability), for physical distribution, and for load balancing
(supporting runtime performance).
[Figure: a browser-hosted client program communicating with a servlet on the server.]
The client program, interpreted by the browser, sends a request to the server; if the application is a servlet-
enabled application, the servlet is executed and the response is sent back to the client program.
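The client-server structure can be sketched with sockets: the server component below answers each request, and the protocol (here a one-line echo) is carried by the socket connector. This is a minimal loopback illustration of our own, not a real web server or servlet container.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: one client component, one server component, one socket connector.
class EchoPair {
    static String requestResponse(String request) {
        try (ServerSocket server = new ServerSocket(0)) {   // server component, any free port
            Thread handler = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("echo: " + in.readLine());  // the shared protocol
                } catch (IOException ignored) {
                }
            });
            handler.start();
            try (Socket client = new Socket("localhost", server.getLocalPort()); // client component
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
                out.println(request);
                return in.readLine();
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The protocol and message format are what the components share; swapping the protocol (say, to HTTP) changes the connector without changing the overall client-server structure.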
Deployment:
Consider a typical example of Remote Procedure Call (RPC) for invoking a remote procedure in
a distributed network.
For a typical remote procedure invocation, the various mechanisms that should be
taken into consideration are the core focus of this structure.
Relations are "allocated-to," showing on which physical units the software elements reside, and
"migrates-to," if the allocation is dynamic. This view allows an engineer to reason about performance,
data integrity, availability, and security. It is of particular interest in distributed or parallel systems.
Implementation:
This structure shows how software elements (usually modules) are mapped to the file
structure(s) in the system's development, integration, or configuration control environments.
View - management of
developmental activities, build process
Work assignment:
This structure assigns responsibility for implementing and integrating the modules to the
appropriate development teams. The architect will know the expertise required on each team. Also, on
large multi-sourced distributed development projects, the work assignment structure is the means for
calling out units of functional commonality and assigning them to a single team, rather than having them
implemented by everyone who needs them.
View - supportability
In general, not all structures are applicable to all projects. The structures should be chosen as per
the requirements and with respect to the views. Some structures
may be strictly designed for a particular system. A combination of structures, termed a hybrid structure, may also
be a possible option, but not always.
The Unified Modeling Language (UML) makes its main contribution in a view's primary presentation,
and its secondary contribution in documenting the behavior of an element or group of elements. The architect
can use it to augment the supporting documentation that records the rationale behind design choices. UML
provides no direct support for components, connectors, layers, interface semantics, or many other aspects of a
system that are supremely architectural, but its notations help in the primary presentation, which stimulates the
development process.
We have already discussed that a module is a code or unit of implementation. A module view is an enumeration of
modules together with their interfaces and their relations.
Module views in UML can be represented as follows.
Uses view : module A depends on module B, shown as a <<subsystem>> package with a dependency arrow.
Decomposition view : a package decomposed into modules B, C and D; class A shown as a composition of
classes B, C and D.
Layered View
As the notation indicates, each layer is an abstraction of a package, which is again decomposed into subclasses.
Each layer uses the layer below it.
As we have discussed already, components are run-time entities and connectors are the communication paths
across the components.
Process view
Runtime components
Connector View
Option 1 : The lines between components are connectors that associate the classes. However, the behavioral part
of the components, unlike that of classes, must be taken into consideration.
Option 2 : Interfaces as annotations; that is, additional information can be provided about the connection.
Option 3 : Interfaces as class/object attributes, which limits the description to class definitions only.
Option 4 : Interfaces as UML interfaces. The UML lollipop notation provides a compact description of an interface
in a class diagram depicting a component type.
Option 5 : Interfaces as classes. A class within a component itself becomes part of the connector, which
improves the detailing of the association.
In UML, a deployment diagram is a graph of nodes connected by communication associations. Figure 9.13
provides an example. Nodes may contain component instances, which indicate that the component lives or runs
on the node. Components may contain objects, which indicate that the object is part of the component.
Components are connected to other components by dashed-arrow dependencies (possibly through interfaces).
This indicates that one component uses the services of another; a stereotype may be used to indicate the precise
dependency if needed. The deployment type diagram may also be used to show which components may run on
which nodes, by
using dashed arrows with the stereotype «supports».
A UML deployment diagram
The nodes are the run-time components and they are associated with their executable environment. The nodes are
also interrelated with other executable environments. For example, in a home surveillance system, the homeowner
accesses the surveillance system and the access approval is done on the server side.
Architectural viewpoints provide a framework for capturing reusable architectural knowledge that can be used to
guide the creation of a particular type of (partial) AD. In a relatively unstructured activity like architecture
definition, the idea of the viewpoint is very appealing. If we can define a standard approach, a standard language,
and even a standard meta model for describing different aspects of a system, stakeholders can understand any AD
that conforms to these standards once familiar with them.
The "4 + 1" model proposed by Kruchten describes a system from four different perspectives, along with an
additional view that provides a common thread for all of them. Kruchten makes us understand that a system cannot,
and need not, be captured as a single set of functionalities: a single representation of a system tries to
accommodate a number of factors and in the process may lose its original focus.
Example : consider a networking application where there is a continuous flow of messages between both end nodes.
What your system should represent from the above ?
Requirement : continuous flow of message without jitter.
Representation for the requirement
- What should your boxes represent?
Process/modules/source code/ running protocols/ layers/physical
nodes/logical nodes
- What should the connector represent?
Interface/dependency/control flow/data flow
- What aspect of the software development should be focused on?
Data engineering/quality factors/team organization/ development strategy
- Which stakeholder’s concern will be addressed by your design
Customer/manager/developer/system engineer/end-user/integrator
We need to address ALL OF THE ABOVE concerns. The first point to note is that this cannot be done
with a single architectural style. You need multiple perspectives through which these details can be organized,
and you need standard views which can be understood globally. Organizing the views into logical,
process, development and physical views gives the 4 views; the additional + 1 denotes the scenarios that
travel along with all the views. As a result, Kruchten organized the description of a software architecture using
several concurrent views, each one addressing one specific set of concerns. This is shown in the figure.
4 + 1 view
Software architecture deals with the design and implementation of the high-level structure of the software.
The main aim is
-To assemble a certain number of architectural elements in some well-chosen forms to satisfy
the major functionality and performance requirements of the system
- to satisfy the non-functional requirements such as reliability, scalability, portability, and availability.
The formula as proposed by Perry and Wolf and modified by Boehm defines software architecture
as a combination of
Software architecture = {Elements, Forms, Rationale/Constraints}
To address large, complex systems, for which making architectural decisions is a challenge, Kruchten
proposed these 4 + 1 views.
Logical view, which is the object model of the design.
Process view, which captures the concurrency and synchronization aspects of the design.
Physical view, which describes the mapping(s) of the software onto the hardware and reflects its
distributed aspect.
Development view, which describes the static organization of the software in its development
environment.
The description of an architecture—the decisions made—can be organized around these four views, and then
illustrated by a few selected use cases, or scenarios which become a fifth view. The architecture is in fact partially
evolved from these scenarios as we will see later.
Logical Architecture
The logical view notation is derived from the Booch notation. It is considerably simplified to take into
account only the items that are architecturally significant, that is, the items that play a major role in
architectural decisions; that is sufficient at this level of design. Various tools (e.g., Rational Rose®) can
support the logical architecture design. The logical notations are shown in Fig.
The top-level class diagram contains 8 class categories: the basic elements, their services, and the aspects
related to flight management, traffic management and simulation are carefully grouped into fine-grained logical structures.
Process Architecture
For example,
Logical Networks can be
On-line operational system
Off-line operational system
Simulation/training
Training version of the software
These logical networks coexist as independent executable units (processes); they communicate
through LAN/WAN, share physical resources and also support execution as a single large unit.
Process is a grouping of tasks that form an executable unit. Processes represent the level at which the
process architecture can be tactically controlled (i.e., started, recovered, reconfigured, and shut down).
In addition, processes can be replicated for increased distribution of the processing load, or for
improved availability. The various notations for process view are depicted in Fig.
As a process is a single executable unit, it is drawn with component notation. The connectors may take
different forms: interaction through messages, remote procedure calls, unidirectional/bidirectional
transformation of messages, broadcasting of events that can be captured by other connected components,
and so on. These are the notations originally proposed by Booch et al. and expanded by Kruchten.
Development Architecture
The organization of software modules in the software development environment is the focus of the
development view. The software is packaged in small chunks, program libraries or subsystems, that
can be developed by one or a small number of developers. The subsystems are organized in a hierarchy
of layers, each layer providing a narrow and well-defined interface to the layers above it.
The complete development of the system requires the development of individual units and their
coordination with other elements. So you need ‘import’ and ‘export’ relationships for the interactions
across these units. The notation for the development view is shown in the following Fig.
Physical Architecture
The physical view is the mapping of the software units onto the hardware.
The initial focus is on the non-functional requirements of the system
such as
availability
reliability (fault-tolerance)
performance (throughput)
scalability.
The software executes on a network of computers, or processing nodes (or just nodes for short).
The various elements identified (networks, processes, tasks and objects) need to be mapped onto the
various nodes.
Hence the mapping of the software onto the nodes should be highly flexible and have minimal impact
on the source code itself, because different environments impose different sets of constraints.
Components
Processor
Other Devices
Connectors
Communication
unidirectional
Non-permanent
High bandwidth communication bus
The physical architecture diagrams are kept simple, with or without the mapping of the process view,
because a very detailed mapping view tends to become messy from the overall perspective. Hence we
limit ourselves to a simpler set of options, although a varied set of notations is available in the
literature for further reference.
Scenarios
The elements in the four views are shown to work together seamlessly by the use of a small set of
important scenarios —instances of more general use cases—for which we describe the corresponding
scripts (sequences of interactions between objects, and between processes) as described by Rubin and
Goldberg. The scenarios are in some sense an abstraction of the most important requirements. Their
design is expressed using object scenario diagrams and object interaction diagrams.
This view is redundant with the other ones (hence the “+1” rather than 5 views), but it serves two main
purposes:
• as a driver to discover the architectural elements during the architecture design as we will
describe later
• as a validation and illustration role after this architecture design is complete, both on paper and
as the starting point for the tests of an architectural prototype.
Scenario Notation
The notation is very similar to the Logical view for the components (cf. fig. 2), but uses the
connectors of the Process view for interactions between objects (cf. fig. 4). Note that object instances
are denoted with solid lines. As for the logical blueprint, we capture and manage object scenario
diagrams using Rational Rose.
Similarity in Notations
Components from the Logical View
Connectors from the Process view
Solid lines for object instances
Object scenario diagrams from Rational Rose
-a passive object never invokes spontaneously any operations and has no control over the
invocation of its own operations by other objects
- a protected object never invokes spontaneously any operations but performs some arbitration on
the invocation of its operations.
This clustering proceeds until we have reduced the processes to a reasonably small number that
still allows distribution and use of the physical resources.
Outside-in:
Starting with the physical architecture:
identify external stimuli (requests) to the system
define client processes (which issue requests) and server processes (which only provide services and
do not initiate them)
use the data integrity and serialization constraints to define the right set of servers
allocate objects to the client and servers agents
identify which objects must be distributed.
The result is a mapping of classes (and their objects) onto a set of tasks and processes of the
process architecture. Typically, there is an agent task for an active class, with some variations: several
agents for a given class to increase throughput, or several classes mapped onto a single agent because
their operations are infrequently invoked or to guarantee sequential execution.
Fig. 19 shows how a small set of classes from a hypothetical air-traffic control system may be
mapped onto processes.
Mapping from logical to process view
From the Figure , you can observe the following:
The set of classes are
- flight
- clearance
- airspace
- sectorization
- location
- profile
Mapping onto the agent tasks
The classes flight, profile and clearance are mapped to flight agents. An agent handles
multiple flights
a high rate of external requests
critical response time
load balancing across CPUs
Profile and clearance are subordinates of the flight class.
The sectorization class partitions the airspace; since integrity constraints are a must, it is
handled by a single agent that shares the process with the server.
Locations, airspace and static aeronautical information are protected objects (shared by
other objects), each mapped to its own server.
From Logical to Development
A class is usually implemented as a module, for example as a user-defined type in Java.
-Large classes are decomposed into multiple packages
-Collections of class categories are grouped into subsystems.
Additional constraints for the definition of subsystems
- team organization
- expected magnitude of code (typically 5K to 20K SLOC per subsystem)
- degree of expected reuse and commonality
-strict layering principles (visibility issues)
- release policy
- configuration management.
Therefore, we usually end up with a view that does not have a one to one correspondence with
the logical view.
The logical and development views are very close, but address very different concerns. We have
found that the larger the project, the greater the distance between these views. Similarly, for the
process and physical views: the larger the project, the greater the distance between the views.
From process to physical
Processes and process groups are mapped onto the available physical hardware, in various
configurations for testing or deployment. Birman describes some very elaborate schemes for this
mapping in the Isis project.
The scenarios relate mostly to the logical view, in terms of which classes are used, and to the
process view when the interactions between objects involve more than one thread of control.
The Siemens model is the outcome of industrial practices of software architecture. The structures fall
into four broad categories:
Conceptual
Module
Execution
Code structures.
As in any other model, each category addresses different stakeholder concerns. Addressing the system
from different views reduces implementation complexity and improves reusability and the scope for
reconfiguration. The software architecture documentation, which is the focus of the next chapter, will also play an
important role in addressing these issues.
The Siemens four views, with reference to the architectural considerations, are shown in Fig. The software
architecture comprises the conceptual view, module view, code view and execution view. The views are
interfaced with one another, and their feedback ensures the next level of processing in due course of the system
development.
Four Views
Further expansion of the four views is explored in the following fig. As you can observe, from the industrial
perspective you need a global analysis of all the views. Strategies need to be formulated which can then
mature into the core design tasks. These tasks include the components involved in the respective views; the
components are formulated as a set of tasks from the respective views. Only then is the final design task realized,
after getting the respective feedback from all the other views. This elaboration is depicted along with the flow of
events that occur in order to fine-tune the overall view.
More focus is on the design approach and therefore the other stakeholder’s view may be limited. The four
views of this model are loosely coupled. The design flow follows the information passed between views starting
from the conceptual view. The feedback results from the testing of views for conformance to the nonfunctional
requirements of the system. Several important mappings of structures are explicitly defined in the design approach.
Conceptual structures are “implemented-by” module structures and “assigned-to” execution structures. Module
structures can be “located-in” or “implemented-by” code structures. Code structures can configure execution
structures. More emphasis is made on the design approach for the software architect.
1. Collect scenarios.
A set of use cases is developed to represent the system from the user’s
point of view.
2. Elicit requirements, constraints, and environment description.
This information is required as part of requirements engineering and is used to be
certain that all stakeholder concerns have been addressed.
3. Describe the architectural styles/patterns that have been chosen to address
the scenarios and requirements.
The architectural style(s) should be described using one of the following architectural views:
• Module view
for analysis of work assignments with components and the degree to which information
hiding has been achieved.
• Process view
for analysis of system performance.
• Data flow view
for analysis of the degree to which the architecture meets functional requirements.
4. Evaluate quality attributes by considering each attribute in isolation.
The number of quality attributes chosen for analysis is a function of the time available for
review and the degree to which quality attributes are relevant
to the system at hand.
Quality attributes for architectural design assessment include
reliability, performance, security, maintainability, flexibility, testability, portability, reusability,
and interoperability.
5. Identify the sensitivity of quality attributes to various architectural attributes for a specific
architectural style.
This can be accomplished by making small changes in the architecture and determining how sensitive a
quality attribute, say performance, is to the change. Any attributes that are significantly affected by
variation in the architecture are termed sensitivity points.
6. Critique candidate architectures (developed in step 3) using the sensitivity
analysis conducted in step 5.
There is an almost unlimited supply of views to choose from. Depending on the information to be rendered
and on the intended viewers, there are multiple sets of views that can be grouped logically. The views are
broadly categorized into three:
1. Module views
describe how the system is to be structured as a set of code units.
2. Component-and-connector (C&C) views
describe how the system is structured as a set of elements that have runtime behaviour and interactions.
3. Allocation views
describe how the system relates to non-software structures in its environment.
A particular view of a system is sure to fall into one of these categories or combine information from more
than one category.
View Styles
A view is a representation of a structure that is present in a software system. One might show the
hierarchical decomposition of the system’s functionality into modules or how the system is arranged
into layers; another might show how the system accomplishes work through communicating processes
or the interaction of clients and servers. Still another might show how software elements are deployed
onto hardware processing and communication nodes.
An architect chooses the structures to work with and designs them to achieve particular quality
attributes using architectural styles.
A style is a specialization of element types (e.g., “client,” “layer”) and relationship types (e.g., “is part
of,” “request-reply connection,” “is allowed to use”), along with any restrictions (e.g., “clients interact
with servers but not each other” or “all the software comprises layers arranged in a stack such that each
layer can only use software in the next lower layer”).
A standard view template is shown in the Fig., adapted from [Clements 99].
Summary
The architectural views form a major part of the architectural definition. They are broadly classified
into the logical, process, development and physical views. Each view has a different set of components
and a different way of integrating them, because the systems developed span diverse domains with their
respective sets of constraints. Therefore various authors propose various architectural views, and hence
their own specifications. First we need to understand why we need these different views, and then
how to apply them to our own system domain, which will pave the way to an increased success rate for that
software. We have also discussed the 4+1 views of the RUP, the Siemens four views and the SEI’s
perspective of the system.
Questions
References
ARCHITECTURAL STYLES
4.1 Introduction
Any e-governance system requires an understanding of the software development processes, which are
clearly indicated in a life cycle. Several life-cycle models exist in the literature; a comprehensive
life-cycle model, the Evolutionary Delivery Life Cycle model, is shown in Figure 4.2.
The intent of this model is to get user and customer feedback and iterate through several releases
before the final release. The model also allows the adding of functionality with each iteration and the
delivery of a limited version once a sufficient set of features has been developed.
4.2 Designing the Architecture
Designing an architecture means satisfying both quality requirements and functional requirements; the
method discussed here is Attribute-Driven Design (ADD). ADD takes as input a set of quality attribute
scenarios and employs knowledge about the relation between quality attribute achievement and architecture in
order to design the architecture.
The ADD method can be viewed as an extension to most other development methods, such as the
Rational Unified Process. The Rational Unified Process has several steps that result in the high-level
design of an architecture but then proceeds to detailed design and implementation. Incorporating
ADD into it involves modifying the steps dealing with the high-level design of the architecture.
As shown in Fig. 4.2, the initial software concept needs to be materialized into the final system by
following the series of processes involved: requirements analysis, design, an initial version of the
system, and evolution with reference to customer feedback.
The difference between an architecture resulting from ADD and one ready for implementation
rests in the more detailed design decisions that need to be made. These could be, for example, the
decision to use specific object-oriented design patterns or a specific piece of middleware that brings
with it many architectural constraints. The architecture designed by ADD may have intentionally
deferred this decision to be more flexible.
2. Documenting software architectures:
Documenting the architecture is the crowning step to crafting it. Even a perfect architecture is
useless if no one understands it or (perhaps worse) if key stakeholders misunderstand it. If you go to
the trouble of creating a strong architecture, you must describe it in sufficient detail, without ambiguity,
and organized in such a way that others can quickly find needed information. Otherwise, your effort
will have been wasted because the architecture will be unusable.
This principle is useful because it breaks the problem of architecture documentation into more
tractable parts, which provide the structure for the remainder of this chapter:
Choosing the relevant views
Documenting a view
Documenting information that applies to more than one view
Software architecture views are divided into these three groups: module,
component-and-connector (C&C), and allocation. This three-way categorization reflects the fact that
architects need to think about their software in at least three ways at once:
How it is structured as a set of implementation units
How it is structured as a set of elements that have runtime behavior and interactions
How it relates to non-software structures in its environment
There is no industry-standard template for documenting a view, but the seven-part standard
organization that we suggest in this section has worked well in practice.
1. Primary presentation
2. Element catalog
3. Context diagram
4. Variability guide
5. Architecture background
6. Glossary of terms
7. Other information
Figure 3: The seven parts of a documented view
Hierarchical architecture is a form of control system in which each component belongs to a distinct
level of a hierarchy that must be adhered to. It includes a set of devices and controlling software arranged in a
hierarchical tree. Hierarchical architecture is used in the organization of class libraries, such as the .NET
class library’s namespace hierarchy.
In hierarchical architecture, the software system is decomposed into logical modules or subsystems at
different levels in the hierarchy. This architecture is used in designing system software such as
network protocols and operating system.
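The namespace-style hierarchy mentioned above can be sketched in a few lines. This is a minimal illustration only; the module names are invented for the example, in the spirit of the .NET namespace hierarchy.

```python
# Minimal sketch of a hierarchical (namespace-like) organization: modules
# are nested so that each fully qualified name records its level in the
# hierarchy. The names below are illustrative, not real .NET contents.
hierarchy = {
    "System": {
        "IO": ["File", "Stream"],
        "Net": ["Socket"],
    }
}

def qualified_names(tree, prefix=""):
    """Walk the hierarchy and return the fully qualified leaf names."""
    names = []
    for key, child in tree.items():
        path = f"{prefix}{key}"
        if isinstance(child, dict):               # inner node: recurse one level down
            names += qualified_names(child, path + ".")
        else:                                     # leaf list: emit qualified names
            names += [f"{path}.{leaf}" for leaf in child]
    return names

print(qualified_names(hierarchy))
# ['System.IO.File', 'System.IO.Stream', 'System.Net.Socket']
```

Each name carries its full path, so the decomposition into logical levels is visible directly in the identifier, which is exactly what a namespace hierarchy provides.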
There are three types of hierarchical architecture, as shown in Fig. 4.3.
Do note that the architectural style may differ for different system domains as discussed in Chapter 1.
4.3.1. Main-Subroutine
4.3.2. Master-slave
In master-slave architecture, the slaves provide replicated services to the master, and the
master chooses a particular result from among the slaves by a certain selection strategy.
The slaves may perform the same functional task by different algorithms and methods, or
provide totally different functionality.
Master-slave architecture is suitable for applications where the reliability of the software is
a critical issue, and can be implemented to minimize semantic errors.
This architecture offers faster computation and easy scalability.
Master-slave architecture also has limitations: it is hard to implement, not all problems
can be divided, and it has portability issues.
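The interaction described above can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the two square-root algorithms, the deliberately faulty slave and the majority-vote selection strategy are all assumptions chosen for the example.

```python
# Minimal master-slave sketch: the master dispatches the same task to
# several slaves (different square-root implementations) and selects a
# result by majority vote, masking the faulty slave.
from collections import Counter

def slave_newton(x):
    """Square root by Newton's method."""
    g = x or 1.0
    for _ in range(50):
        g = (g + x / g) / 2
    return round(g, 6)

def slave_builtin(x):
    """Square root via exponentiation."""
    return round(x ** 0.5, 6)

def slave_faulty(x):
    """Simulates a slave with a semantic error."""
    return -1.0

def master(task, slaves):
    results = [s(task) for s in slaves]              # replicated service
    winner, _ = Counter(results).most_common(1)[0]   # selection strategy: majority vote
    return winner

print(master(2.0, [slave_newton, slave_builtin, slave_faulty]))  # 1.414214
```

Because two independent slaves agree on the correct answer, the faulty slave's result is outvoted; this is the sense in which the style can be used to minimize semantic errors.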
4.3.4. Virtual Machine
Layered Style
The layered style decomposes the system into a number of higher and lower layers,
and each layer has its own responsibility.
Using layered architecture, applications involving distinct classes of services can be
organized hierarchically, with clear divisions between core services, critical services,
user interface services, etc.
Layered architecture design is based on incremental levels of abstraction.
It can be implemented using component-based technology, which makes the system much
easier to extend, allowing plug-and-play of new components.
Using layered architecture, it is easy to decompose the system based on the definition
of the tasks in a top-down refinement manner.
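The division into core, critical and user interface services can be sketched as follows. This is a minimal illustration under the assumption of a strict layering (each layer uses only the layer directly below it); the layer and method names are invented for the example.

```python
# Minimal layered sketch: each layer exposes a narrow interface and uses
# only the layer directly below it.
class CoreServices:                      # lowest layer: storage
    def __init__(self):
        self._db = {}
    def store(self, key, value):
        self._db[key] = value
        return True

class CriticalServices:                  # middle layer: uses only CoreServices
    def __init__(self, core):
        self._core = core
    def save_order(self, order_id, payload):
        return self._core.store(order_id, payload)

class UserInterface:                     # top layer: uses only CriticalServices
    def __init__(self, services):
        self._services = services
    def submit(self, order_id, payload):
        ok = self._services.save_order(order_id, payload)
        return "saved" if ok else "error"

ui = UserInterface(CriticalServices(CoreServices()))
print(ui.submit("o-1", {"item": "book"}))  # saved
```

Because each layer depends only on the narrow interface of the layer below, any layer can be swapped for a new component implementing the same interface, which is the plug-and-play property mentioned above.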
4.4 Data Flow Styles
Sub-styles:
Pipes and Filters
Batch Sequential Processing
Dataflow: Pipe-and-Filter
What is meant by Pipe?
Pipe is a connector which passes the data from one filter to the next.
Pipe is a directional stream of data implemented by a data buffer to store all data, until
the next filter has time to process it.
It transfers the data from one data source to one data sink.
Pipes are stateless data streams.
The above figure shows the pipe-filter sequence. All filters are processes that run at
the same time: they can run as different threads or coroutines, or be located on
entirely different machines.
Each pipe is connected to a filter and has its own role in the functioning of the filter. The
design is robust in that pipes can be added and removed at runtime.
Filter reads the data from its input pipes and performs its function on this data and
places the result on all output pipes. If there is insufficient data in the input pipes, the filter
simply waits.
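The filter behaviour described above can be sketched with generators, where each pipe is simply the lazy stream connecting one filter to the next. The particular filters shown are assumptions chosen for the example.

```python
# Minimal pipe-and-filter sketch using generators: each filter reads from
# its input pipe (an iterator), transforms the data, and yields results
# into its output pipe for the next filter.
def source(lines):
    """Data source: feeds raw lines into the first pipe."""
    yield from lines

def to_upper(pipe):
    """Filter 1: transform each line to upper case."""
    for line in pipe:
        yield line.upper()

def drop_empty(pipe):
    """Filter 2: discard blank lines."""
    for line in pipe:
        if line.strip():
            yield line

def sink(pipe):
    """Data sink: collect the results of the final pipe."""
    return list(pipe)

pipeline = sink(drop_empty(to_upper(source(["hello", "", "world"]))))
print(pipeline)  # ['HELLO', 'WORLD']
```

Because generators are lazy, a filter that finds no data in its input pipe simply does not run until data arrives, mirroring the "filter simply waits" behaviour described above.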
Advantages
Pipe-filter provides concurrency and high throughput for excessive data processing.
It simplifies the system maintenance and provides reusability.
It has low coupling between filters and flexibility by supporting both sequential and
parallel execution.
Disadvantages
Example :
Batch Sequential Processing
Advantages
Simplicity
Severable executions
Disadvantage
No concurrency
No interaction between components software
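The contrast with pipe-and-filter can be seen in a minimal batch-sequential sketch: each stage runs to completion and hands over its complete output before the next stage starts, so there is no concurrency and no interaction between the components. The stages and records are invented for the example.

```python
# Minimal batch-sequential sketch: each stage processes the whole batch
# to completion before the next stage begins.
def validate(records):
    """Stage 1: keep only records with a positive amount (runs to completion)."""
    return [r for r in records if r.get("amount", 0) > 0]

def sort_records(records):
    """Stage 2: starts only after stage 1 has produced its full output."""
    return sorted(records, key=lambda r: r["amount"])

def report(records):
    """Stage 3: format the final batch."""
    return [f"{r['id']}: {r['amount']}" for r in records]

batch = [{"id": "a", "amount": 5}, {"id": "b", "amount": -1}, {"id": "c", "amount": 2}]
out = report(sort_records(validate(batch)))
print(out)  # ['c: 2', 'a: 5']
```

Each function consumes and returns an entire batch; nothing flows between stages incrementally, which is exactly the property that rules out concurrency in this sub-style.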
Language-influenced styles
•Main program and subroutines (Hierarchical)
•Object-oriented
Layered
•Virtual machines
•Client-server
In the call-return style, there is a subroutine which processes the required task
and returns the output to the main program. The subroutines need to be designed such that
they are capable of processing the relevant tasks.
Design elements
As this style focuses on the returned outcomes, emphasis should be on the careful
organization of the returned information and its appropriate handlers. This is
represented in Fig. 4.9. A subroutine may fetch values from other subroutines or from
global data, take part in recursion, invoke other subroutines, and be invoked by the
main program. We need to be careful while designing in this style, as the
subroutines need to be integrated to serve a common purpose, as illustrated in Fig. 4.10.
Fig 4.9 Structure of Call and Return Architectures
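The structure just described can be sketched as follows. This is a minimal illustration in which the particular subroutines, the global data item and the use of recursion are all assumptions chosen for the example.

```python
# Minimal call-and-return sketch: the main program calls subroutines,
# which may use global data, call other subroutines, or recurse.
TAX_RATE = 0.1                      # global data shared by the subroutines

def subtotal(prices):
    """Subroutine called by main; uses recursion instead of a loop."""
    if not prices:
        return 0.0
    return prices[0] + subtotal(prices[1:])

def add_tax(amount):
    """Subroutine that reads the global data."""
    return amount * (1 + TAX_RATE)

def main(prices):
    """Main program: coordinates the subroutines and handles the result."""
    return round(add_tax(subtotal(prices)), 2)

print(main([10.0, 20.0]))  # 33.0
```

As long as the interface of each subroutine is preserved, its implementation can be replaced freely, which is the modularity advantage noted in the style analysis below.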
Style Analysis
What are common examples of its use?
•Small programs, pedagogical uses
What are the advantages of using the style?
•Modularity: subroutines can be replaced as long as interface semantics are unaffected
What are the disadvantages of using the style?
•Usually fails to scale
•Inadequate attention to data structures
•Effort to accommodate new requirements: unpredictable
•Relation to programming languages/environments
•Traditional programming languages: BASIC, Pascal, C…
Object-Oriented architecture maps the application to real world objects for making it
more understandable.
It is easy to maintain and improves the quality of the system due to program reuse.
This architecture provides reusability through polymorphism and abstraction.
It has the ability to manage errors during execution (robustness).
It has the ability to take on new functionality without affecting the rest of the system (extensibility).
It improves testability through encapsulation.
Object-Oriented architecture reduces the development time and cost.
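The reuse through polymorphism and abstraction mentioned above can be sketched as follows. This is a minimal illustration; the shape classes are invented for the example.

```python
# Minimal object-oriented sketch: polymorphism lets the caller work with
# an abstraction; new classes can be added without changing existing code.
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):
        ...

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

class Circle(Shape):                 # added later; callers need no change
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

def total_area(shapes):
    """Depends only on the Shape abstraction, not on concrete classes."""
    return sum(s.area() for s in shapes)

print(total_area([Rectangle(2, 3), Circle(1)]))
```

`total_area` never mentions `Rectangle` or `Circle`; adding a new shape requires no change to it, which is the extensibility property listed above.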
1. Case Studies
Compiler View
The architecture of a system can change in response to improvements in technology. This can
be seen in the way we think about compilers. In the 1970s, compilation was regarded as a
sequential process, and the organization of a compiler was typically drawn as in Figure 4.14.
Text enters at the left end and is transformed in a variety of ways, to lexical token stream,
parse tree and intermediate code, before emerging as machine code on the right.
We often refer to this compilation model as a pipeline, even though it was (at least originally)
closer to a batch sequential architecture in which each transformation (“pass”) was completed
before the next one started.
In fact, even the batch sequential version of this model was not completely accurate. Most
compilers created a separate symbol table during lexical analysis and used or updated it during
subsequent passes. It was not part of the data that flowed from one pass to another but rather
existed outside all the passes.
So the system structure was more properly drawn as in Figure 4.15.
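The batch-sequential compilation with a symbol table outside the passes can be sketched as follows. This is a minimal illustration only: the token and tree representations are deliberately trivial, and the instruction names are invented for the example.

```python
# Minimal sketch of batch-sequential compilation with a shared symbol
# table: each pass completes fully, and the symbol table sits outside
# the data that flows from one pass to the next.
symbol_table = {}                       # shared across all passes

def lexical_analysis(text):
    """Pass 1: split into tokens and record identifiers in the symbol table."""
    tokens = text.split()
    for t in tokens:
        if t.isidentifier():
            symbol_table.setdefault(t, {"kind": "identifier"})
    return tokens                       # only the tokens flow onward

def parse(tokens):
    """Pass 2: build a (trivial) parse tree."""
    return ("program", tokens)

def code_gen(tree):
    """Pass 3: reads the shared symbol table built during pass 1."""
    _, tokens = tree
    return [f"LOAD {t}" if t in symbol_table else f"PUSH {t}" for t in tokens]

code = code_gen(parse(lexical_analysis("x 1 y")))
print(code)  # ['LOAD x', 'PUSH 1', 'LOAD y']
```

Note that `symbol_table` never appears in the data passed between the functions: it is created by the first pass and consulted by the last, exactly the structure the text describes.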
As time passed, compiler technology grew more sophisticated. The algorithms and
representations of compilation grew more complex, and increasing attention turned to the
intermediate representation of the program during compilation.
Improved theoretical understanding, such as attribute grammars, accelerated this trend.
The consequence was that by the mid-1980s the intermediate representation (for example, an
attributed parse tree) was the centre of attention. It was created early during compilation
and manipulated during the remainder; the data structure might change in detail, but it
remained substantially one growing structure throughout. However, we continued (sometimes
to the present) to model the compiler with sequential data flow as in Figure 4.16.
Classical example: the shared memory style is the way that systems were built for performance reasons until the
early 1970s. It is not normally used today owing to concerns with other qualities (shared
memory does not easily scale up to large architectures). The shared-memory solution for the KWIC system is
shown in the following figure.
Calls to circular shift and alphabetizer are implicit, and are the result of inserting lines.
Adding a function is easy: register it against the event of interest, and it is automatically called.
Advantage: modifiability to add new functions
Disadvantages: tracking control flow is difficult;
performance (implicit invocations will add overhead in tasking if there are many tasks); harder to analyze system
for deadlocks
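The implicit-invocation mechanism described above can be sketched as follows. This is a minimal illustration echoing the KWIC discussion; the event name and the handlers are assumptions chosen for the example.

```python
# Minimal implicit-invocation sketch: handlers register for an event and
# are called automatically when the event is announced, rather than being
# invoked explicitly by the component that produced the data.
subscribers = {}

def register(event, handler):
    """Register a handler against the event of interest."""
    subscribers.setdefault(event, []).append(handler)

def announce(event, data):
    """Announce an event; all registered handlers are called implicitly."""
    for handler in subscribers.get(event, []):
        handler(data)

log = []
register("line_inserted", lambda line: log.append(("shift", line)))
register("line_inserted", lambda line: log.append(("alphabetize", line)))

announce("line_inserted", "software architecture")
print(log)
# [('shift', 'software architecture'), ('alphabetize', 'software architecture')]
```

Adding a new function is just one more `register` call, which illustrates the modifiability advantage; equally, nothing in `announce` reveals which handlers will run, which is why control flow is hard to track in this style.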
Questions
1. Which architectural style goal is to achieve Integrability?
a) Data Flow Architecture
b) Call and Return Architecture
c) Data Centered Architectures
d) None of the mentioned
2. Which of the architectural style is further subdivided into Batch sequential
and Pipes & filters?
a) Data Flow Architecture
b) Call and Return Architecture
c) Data Centered Architectures
d) None of the mentioned
3. Which of the following type has the main goal to achieve performance?
a) Main program and subroutine Architecture
b) Remote Procedure Call system
c) Object Oriented or abstract data type system
d) All of the mentioned
4. Data Centered architecture is subdivided into which of the following
subtypes?
a) Repository and Blackboard
b) Batch Sequential, Pipes and Filters
c) All of the mentioned
d) None of the mentioned
5. In the context of architectural design, genre implies with a,
a) Specific category within the overall software domain
b) Software testing
c) Software maintenance
d) None of the mentioned above
6. An architectural Style Encompasses which of the following elements
a) Syntactic Models
b) Semantic Models
c) Data Models
d) Object oriented Models
7. Which architectural style goal is to achieve Modifiability with Scalability?
a) Data Flow Architecture
b) Call and Return Architecture
c) Virtual Machine Architecture
d) Event Style Architecture
8. Which of the following type has the main goal to achieve Modifiability?
a) Main program and subroutine Architecture
b) Remote Procedure Call system
c) Object Oriented or abstract data type system
d) Main program and subroutine Architecture, Object Oriented or abstract
data type system
9. To determine the architectural styles or Combination of styles that best fits
the proposed System, requirement engineering is used to uncover
a) Algorithmic Complexity
b) Characteristics and Constraints
c) Control and data
d) Design Patterns
10. In Architecture trade-off analysis method the architectural Style should be
described using
a) User View
b) Module view
c) Object view
d) Software view
Exercises
1. A large number of styles are designed to support the quality of modifiability. For any payment system , evaluate
styles of the different types of modification supported by them.
2. Styles, as they are commonly described, are the result of empirical observation, not a taxonomic organization
from first principles. As a result, overlap is high: Objects can be cooperating processes that can be layered,and so
on. Explore what architectural structures from chapter 3 are involved in the description of each style.
References
Communication is the key in architecture. Even the best architecture is useless if it cannot be
communicated properly to the other stakeholders so that they can use it to do their jobs.
Documentation speaks for the architect: even when the current architect leaves, it must be
easy for the new architect to understand the system. The best architects produce the best
documentation not because it is required, but for the stakeholders, as the people involved in
this undertaking are developers, deployers, testers and analysts. Documentation serves as the
receptacle that holds the results of design decisions as they are made. A documentation scheme
makes the process of design go much more smoothly and systematically. Documentation helps the
architect while the architecting is in progress, whether in a six-month design phase or a six-day
agile sprint.
1. Write the documentation from the point of view of the reader not the writer.
2. Avoid unintended ambiguity: the document should not suggest ideas other than the original ones.
3. Use a standard organization for the documentation, so that readers know what to expect and
where to find it.
4. Record the rationale and reasoning behind decisions.
5. Avoid repetition.
6. Keep the documentation current, recording progress and the known faults in the system.
7. Review the document frequently to check that it serves the right purposes.
1. Architecture documentation serves as a means of introducing people to the system. The people may be
new members of the team, external analysts or even a new architect. The “new” person may even be a
customer to whom you’re showing the project for the first time.
2. Architecture serves as a primary vehicle for communication among stakeholders.
The architecture is used as a communication tool with the stakeholders. Documenting an
architecture also helps in the process of designing the architecture.
- Documentation provides dedicated compartments for recording various kinds of
design decisions.
- Documentation gives you rough but helpful way to gauge progress and the work
remaining.
- Documentation provides a framework for systematic attack on designing the
architecture.
Architecture tells implementers what to implement, and architecture documentation serves as
the input for evaluating the design's ability to meet the system's quality objectives. Various
attributes such as security, performance, usability, availability and modifiability are needed for
evaluation, and the analysis of each of these attributes has its own information needs. For
system builders who use automatic code-generation tools, the documentation may incorporate
the models used for generation.
ARCHITECTURE VIEWS
A view is a representation of a set of system elements and the relationships associated with
them.
Documenting an architecture is a matter of documenting the relevant views and then adding
documentation that applies to more than one view.
An element catalog that explains and defines the elements shown in the view and lists
their properties.
Information describing how the views relate to one another and to the system as a
whole.
Constraints and rationale for the overall architecture.
Such management information as may be required to effectively maintain the whole
package.
Short version to remember
1. Primary Presentation
2. Element Catalog
3. Context Diagram
4. Variability Guide
5. Architecture Background
6. Glossary of terms
7. Other Information
This principle is useful because it breaks the problem of architecture documentation into more
tractable parts:
- Choosing the relevant views.
- Documenting a view.
- Documenting the information that applies to more than one view.
The three-way categorization reflects the fact that architects need to think about their software
in at least three ways at once:
1. How it is structured as a set of implementation units.
2. How it is structured as a set of elements that have runtime behaviour and interactions.
3. How it relates to non-software structures in its environment.
Choose the views for your project by a simple procedure:
Views present structural information about the system. Sometimes, however, structural
information alone is not sufficient to allow reasoning about some system properties. Behavior
descriptions add information that reveals the ordering of interactions among the elements,
opportunities for concurrency, and the time dependencies of interactions.
Statecharts are a formalism developed in the 1980s for describing reactive systems. They add a
number of useful extensions to traditional state diagrams, such as the nesting of states, which
provide the expressive power to model abstraction and concurrency.
Statecharts allow reasoning about the totality of the system: all of the states are assumed to
be represented, and the analysis techniques are general with respect to the system.
DOCUMENTING INTERFACES
An interface is a boundary across which two independent entities meet and interact or
communicate with each other. Our definition of software architecture made it clear that
elements' interfaces, the carriers of the properties externally visible to other elements, are
architectural. Since you cannot perform analyses or system building without them,
documenting interfaces is an important part of documenting architecture. Documenting an
interface consists of naming and identifying it and documenting its syntactic and semantic
information. The signature names the resource and defines its parameters; it gives the kind of
information you would find in, for example, an element's C or C++ header file or a Java
interface.
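As a sketch of this syntactic information, the hypothetical TemperatureSensor interface below plays the role a C/C++ header file or Java interface would play, naming each resource and spelling out its signature. The interface name and operations are illustrative assumptions, not from the source.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class TemperatureSensor(Protocol):
    """Hypothetical element interface: each resource is named and its
    signature (argument names, logical data types, result type) is stated."""

    def read_celsius(self) -> float:
        """Return the current temperature in degrees Celsius."""
        ...

    def calibrate(self, offset: float) -> None:
        """Apply a calibration offset to all subsequent readings."""
        ...
```

Any element whose implementation supplies these two operations satisfies the interface, much as any module matching a header file's declarations can be linked against it.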
Elements that occur as modules often correspond directly to one or more elements in a
component-and-connector view. The module and component-and-connector elements are
likely to have similar, if not identical, interfaces, and documenting them in both places would
produce needless duplication.
1. Interface identity: When an element has multiple interfaces, identify each one to
distinguish them. This usually means naming them; you may also need to provide a version
number.
2. Resources provided: The heart of the document is the resources that the element provides.
At a minimum, each interface is named, and the architect can also specify signature
information.
Resource syntax: This is the resource's signature. The signature includes any
information another program will need to write a syntactically correct program that
uses the resource: the resource name, the names and logical data types of arguments,
and so forth.
Resource semantics: This describes the result of invoking the resource. It might include:
- assignment of values to data that the actor invoking the resource can access;
this might be as simple as setting the value of a return argument or as
far-reaching as updating a central database.
- events that will be signalled or messages that will be sent as a result of using
the resource.
- how other resources will behave in the future as a result of using the resource.
For example, if you ask a resource to destroy an object, trying to access that
object in the future through other resources will produce quite a different
outcome (an error).
Resource Usage Restrictions: Perhaps data must be initialized before it can be read or a
particular method cannot be invoked unless another is invoked first. Only certain
resources or interfaces are accessible to certain actors to support a multi-level security
scheme.
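A sketch of how resource semantics and usage restrictions read in practice: the hypothetical DataStore element below documents, and enforces, the restriction that initialize() must be invoked before get(). All names are illustrative.

```python
class DataStore:
    """Hypothetical element whose interface carries a usage restriction:
    initialize() must be called before get() may be used."""

    def __init__(self) -> None:
        self._data = {}
        self._initialized = False

    def initialize(self, data: dict) -> None:
        # Resource semantics: assigns values that the invoking actor
        # can later access through get().
        self._data = dict(data)
        self._initialized = True

    def get(self, key: str):
        # Usage restriction: reading before initialization is an error.
        if not self._initialized:
            raise RuntimeError("initialize() must be called before get()")
        return self._data[key]
```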
3. Data type definitions: If any interface resources employ a data type other than one
provided by the underlying programming language, the architect needs to communicate
the definition of that data type. If it is defined by another element, then a reference to the
definition in that element's documentation is sufficient.
4. Exception definitions: These describe exceptions that can be raised by the resources on
the interface. Since the same exception might be raised by more than one resource, it is often
convenient to list each resource's exceptions but define them in a dictionary collected
separately; this section is that dictionary. Common exception-handling behaviour can also
be defined here.
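As a small sketch of such a dictionary, the hypothetical interface below defines each exception once, while two different resources (fetch and delete) raise the same NotFoundError; the names are assumptions for illustration.

```python
class InterfaceError(Exception):
    """Root of the exception dictionary for this hypothetical interface."""

class NotFoundError(InterfaceError):
    """Raised when a requested record does not exist."""

def fetch(record_id, records):
    """Resource 1: look up a record."""
    if record_id not in records:
        raise NotFoundError(record_id)
    return records[record_id]

def delete(record_id, records):
    """Resource 2: remove a record; raises the same dictionary exception."""
    if record_id not in records:
        raise NotFoundError(record_id)
    del records[record_id]
```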
5. Variability provided by the interface: An interface may allow the element to be
configured in some way; these configuration parameters and how they affect the semantics of
the interface must be documented. Examples of variability include the capacities of visible
data structures and the performance characteristics of underlying algorithms. Name and
provide a range of values for each configuration parameter, and specify the time when its
actual value is bound.
6. Quality attribute characteristics of the interface: The architect needs to document what
quality attribute characteristics, such as performance or reliability, the interface makes known
to the element's users. This information may be in the form of constraints on implementations
of elements that will realize the interface. Which qualities you choose to concentrate on and
make promises about will depend on the context.
7. Element requirements: What the element requires may be specific, named resources
provided by other elements. The documentation obligation is the same as for provided
resources: syntax, semantics and any usage restrictions. Often it is convenient to document
information like this as a set of assumptions that the element's designer has made about the
other elements of the system.
8. Rationale and design issues: As with the rationale for the architecture or architectural
views at large, the architect should record the reasons behind an element's interface design.
The rationale should explain the motivation for the design, constraints and compromises, what
alternative designs were considered and rejected, and any insight the architect has about how
to change the interface in the future.
9. Usage guide: Items 2 and 7 document an element's semantic information on a
per-resource basis. Sometimes this falls short of what is needed: in some cases semantics
need to be reasoned about in terms of how a broad number of individual interactions
interrelate. In effect, a protocol is involved, which is documented by considering a sequence of
interactions. This is similar to the view-level behaviors presented in the previous section, but
focused on a single element.
MODULE VIEW
Recall that a module is a code or implementation unit and a module view is an enumeration of
modules together with their interfaces and their relations.
INTERFACES
UML also allows a class symbol (box) to be stereotyped as an interface; the open-headed
dashed arrow shows that an element realizes an interface. The bottom of the class symbol can
be annotated with the interface's signature information: method names, arguments, argument
types, and so forth. The lollipop notation is normally used to show dependencies from elements
to the interface, while the box notation allows a more detailed description of the interface's
syntax, such as the operations it provides.
MODULES
UML provides a variety of constructs to represent different kinds of modules. UML has a class
construct, which is the object-oriented specialization of a module. Packages can be used in
cases where grouping of functionality is important, such as to represent layers and classes. The
subsystem construct can be used if a specification of interface and behavior is required.
In UML, the subsystem construct can be used to represent modules that contain other modules;
the class box is normally used for the leaves of the decomposition. Subsystems are used both as
packages and as classifiers. As packages, they can be decomposed and hence are suitable for
module aggregation. As classifiers, they encapsulate their contents and can provide an explicit
interface. Aggregation is depicted in one of three ways in UML:
This figure is about Decomposition in UML with nesting. The aggregate module is shown as a
package (left); decomposition in UML with arcs (right).
Composition is a form of aggregation with implied strong ownership-that is, parts live and die
with the whole. If module A is composed of modules B and C, then B or C cannot exist without A,
and if A is destroyed at runtime, so are B and C.
UML's composition relation has implications beyond the structuring of the implementation
units; the relation also endows the elements with a runtime property. As an architect, you
should make sure you are comfortable with this property before using UML's composition
relation.
GENERALIZATION
Expressing generalization is at the heart of UML in which modules are shown as classes
(although they may also be shown as subsystems). Figure shows the basic notation available in
UML. The two diagrams in Figure are semantically identical. UML allows an ellipsis (…) in place
of a submodule, indicating that a module can have more children than shown and that
additional ones are likely. Module Shape is the parent of modules Polygon, Circle, and Spline,
each of which is a subclass, child, or descendant of Shape. Shape is more general, while its
children are specialized versions.
DEPENDENCY
The most architecturally significant manifestation of dependency is found in layers. UML has
no built-in primitive corresponding to a layer. But it can represent simple layers
using packages, as shown in Figure. These are general-purpose mechanisms for organizing
elements into groups. UML has predefined packages for systems and subsystems. We can
introduce an additional package for layers by defining it as a package stereotype. A layer can be
shown as a UML package with the constraints that it groups modules together and that the
dependency between packages is "allowed to use." We can designate a layer using the package
notation with the stereotype name <<layer>> preceding the layer name, or introduce a new
visual form, such as a shaded rectangle.
COMPONENT-AND-CONNECTOR VIEWS
The full set of UML descriptive mechanisms is available to describe the structure, properties,
and behavior of a class, making this a good choice for depicting detail and using UML-based
analysis tools.
Properties of architectural components can be represented as class attributes or with
associations; behavior can be described using UML behavioral models; and generalization can
be used to relate a set of component types.
The semantics of an instance or type can also be elaborated by attaching one of the standard
stereotypes; for example, the «process» stereotype can be attached to a component to indicate
that it runs as a separate process.
Note that the relationship between MergeAndSort and its substructure is indicated using a
dependency relation.
INTERFACES
Interfaces to components are sometimes called ports. The figure describes the options in
increasing order of expressiveness. As expressiveness rises, so does complexity, so you should
pick the first strategy that will serve your purposes.
4. Option 4: Interfaces as UML interfaces. The UML lollipop notation provides a compact
description of an interface in a class diagram depicting a component type. In an instance
diagram, a UML association role, corresponding to an interface instance and qualified by
the interface type name, provides a compact way to show that a component instance is
interacting through a particular interface instance. This approach provides visually
distinct depictions of components and interfaces, in which interfaces can clearly be seen
as subservient.
There are three reasonable options for representing connectors. Again, the choice is between
expressiveness and semantic match on the one hand and complexity on the other.
3. Option 3: Connector types as classes and connector instances as objects. One way to give
connectors first-class status in UML is to represent connector types as classes and
connector instances as objects. Using classes and objects, we have the same four
options for representing roles as we had for interfaces: not at all, as annotations, as
interfaces realized by a class, or as child classes contained by a connector class. Given a
scheme for representing interfaces, an attachment between a component's interface
and a connector's interface may be represented as an association or a dependency.
- Pipe and Filter view, filters are components and pipes are the connectors.
- In a shared-data view (repository/blackboard), the data repository and the assessors are
the components and the access mechanism are the connectors.
- In a client-server view, the components are clients and servers and the connectors are
the protocol mechanisms by which they interact.
Components are the primary computational elements of a C&C view of a software architecture,
and connectors pass data between them; both feature prominently in architectural
documentation.
An Architecture Description Language (ADL) is used to write formal specifications for
modelling software architecture concepts. An ADL describes the structure and behavior of a
software architecture. The structural part covers components, connectors, interfaces, ports,
channels, configurations, constraints and properties; the behavioural part describes how
components and connectors behave, individually and in combination, along with their
constraints and properties.
Architectural Description Languages (ADLs) provide a means to model and analyze software
architectures in order to improve software quality and correctness. SSD supports the
adoption and standardization of ADL technology for industrial use by demonstrating its
applicability to challenging, current problems in industrial software.
Why do we use ADL…?
Necessity of using standardized architectural representation.
-ADLs bring standards for architecture description.
-UML for design
-Entity-relationship model for database
-Data flow diagram for analysis
-Using architectural styles for the structure
-pipe and filters
-client/server
-Using formal language
-components
-connectors
-Makes the architecture universally understandable
-designers
-programmers
-stakeholders
POSITIVES:
- ADLs support the routine use of existing designs and components in new application
systems.
- ADLs support the evaluation of an application system before it is built.
- ADLs represent a formal way of representing architecture.
- ADLs are intended to be both human and machine readable.
- ADLs support describing a system at a higher level than previously possible.
- ADLs permit analysis of architectures – completeness, consistency, ambiguity and
performance.
- ADL can support automatic generation of design of software systems.
NEGATIVES:
- There is no universal agreement on what ADLs should represent particularly as regards
the behaviour of the architecture.
- Representations currently in use are relatively difficult to parse and are not supported
by commercial tools.
- Most ADL work today has been undertaken with academic rather than commercial goals
in mind.
- Most ADLs tend to be very vertically optimized towards a particular kind of analysis.
Research in ADLs
A number of experimental ADLs have been devised. These include:
- AESOP, UNICON and WRIGHT (Carnegie-Mellon University)
- Darwin & FSP (Imperial College London)
- Koala (Philips Research)
- ACME (Carnegie-Mellon University)
- Rapide (Stanford University)
- xArch/xADL (University of California, Irvine)
- Structural Architectural Description Language (SADL) (SRI International)
ACME was developed jointly by Monroe, Garlan and Wile. ACME is a general-purpose ADL
originally designed as a simple interchange language, and as a language it is extremely simple,
befitting that origin. ACME has no native behavioural specification facility, so only syntactic
linguistic analysis is possible, although there are currently efforts under consideration to
define a behavioural semantics for ACME. ACME has no native generation capability. It has
seen some native tool development, and there are indications of more, as well as use of other
language tools via interchange.
Acme is built on a core ontology of seven types of entities for architectural representation:
components, connectors, systems, ports, roles, representations, and rep-maps. Of the seven
types, the most basic elements of architectural description are components, connectors, and
systems. These are illustrated in the below figures
1. Components represent the primary computational elements and data stores of a system.
They correspond to the boxes in box-and-line descriptions of software architectures. Typical
examples of components include such things as clients, servers, filters, objects, blackboards,
and databases.
2. Connectors represent interactions among components. Computationally speaking,
connectors mediate the communication and coordination activities among components.
Informally they provide the “glue" for architectural designs, and intuitively, they correspond to
the lines in box-and-line descriptions. Examples include simple forms of interaction, such as
pipes, procedure call, and event broadcast. But connectors may also represent more complex
interactions, such as a client-server protocol or a SQL link between a database and an
application.
The first figure shows a trivial architectural drawing containing a client and server component,
connected by an RPC connector.
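That client-server drawing can be written down in Acme roughly as follows; this is a sketch in the spirit of published Acme examples, and the port, role and system names are illustrative assumptions:

```
System simple_cs = {
  Component client = { Port send-request; };
  Component server = { Port receive-request; };
  Connector rpc = { Roles { caller, callee } };
  Attachments {
    client.send-request to rpc.caller;
    server.receive-request to rpc.callee;
  }
}
```

Here the ports are the components' interfaces, the roles are the connector's interfaces, and the attachments define the system configuration.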
SOA
• Uses open standards to integrate software assets as services
• Standardizes interactions of services
• Services become building blocks that form business flows
• Services can be reused by other applications
In other words, SOA is an architectural pattern in computer software design in which
application components provide services to other components via a communications protocol,
typically over a network. The principles of service-orientation are independent of any vendor,
product or technology.
SOA Benefits
Ability to build business applications faster and more easily
This is based on the assumption that the business services have been identified
correctly and that business applications consume the correct services. Less code
means the developer has fewer things to know and worry about; it also means
easier testing, and the application development process gets shortened.
Easier maintenance / update
Less code is easier to maintain, and consumers of a web service are not affected
by changes in the implementation of that service.
At a higher level, if a business process is modified, the equivalent business
service can be recomposed to adapt to the changes, and the change will be
consistent throughout the organization.
For example, if a new database is added, the web service can simply include
information from the new database in its response without the consumer having
to do a single thing.
Business agility and extensibility
In an enterprise, the business environment changes rapidly, and how fast an
enterprise can react to changes has consequences for the organization. The
agility of an enterprise system is demonstrated when, say, the requirements of a
composite service change: all that needs to be done is to replace the relevant
constituent services in order to update the composite service. Extensibility
comes in when a totally new business service needs to be implemented: all that
needs to be done is to assemble relevant services that already exist.
Lower total cost of ownership
All the benefits above translate to lower cost of ownership of IT infrastructure.
This logically follows from the reusability of services. Not only is the service reused,
but the IT infrastructure supporting these services is also reused. Another cost
savings comes from the fact that the shorter time-to-market of business
applications also translates to better returns on the investment of IT
infrastructure.
A service is a reusable component that can be used as a building block to form larger, more
complex business-application functionality. A service may be as simple as “get me some
person data,” or as complex as “process a disbursement”. A service provides a discrete
business function that operates on data. Its job is to ensure that the business functionality is
applied consistently, returns predictable results, and operates within the quality of service
required. How the service is implemented, and how a user of the service accesses it, are
limited only by the SOA infrastructure choices of the enterprise. From a theory point of view,
it really doesn’t matter how a service is implemented.
In other words, a service is a software entity, available in the public domain, that can be
discovered and invoked by other software systems.
Characteristics of service
Supports open standards for integration: Although proprietary integration
mechanisms may be offered by the SOA infrastructure, SOAs should be based on open
standards. Open standards ensure the broadest integration compatibility
opportunities.
Loose coupling: The consumer of the service is required to provide only the stated
data on the interface definition, and to expect only the specified results on the interface
definition. The service is capable of handling all processing (including exception
processing).
Stateless: The service does not maintain state between invocations. It takes the
parameters provided, performs the defined function, and returns the expected result.
If a transaction is involved, the transaction is committed and the data is saved to the
database.
Location agnostic: Users of the service do not need to worry about the
implementation details for accessing the service. The SOA infrastructure will provide
standardized access mechanisms with service-level agreements.
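These characteristics can be sketched in a few lines of Python: the hypothetical person-data service below is stateless (it keeps no conversation state between invocations) and loosely coupled (the consumer supplies only the fields named in the interface and relies only on the fields named in the response). The service name, request fields and data source are assumptions for illustration.

```python
PEOPLE = {"p1": {"name": "Ada", "role": "Engineer"}}  # stand-in data source

def get_person_data(request: dict) -> dict:
    """Hypothetical stateless business service ("get me some person data").
    It takes the parameters provided, performs the defined function, and
    returns the expected result, keeping no state between invocations."""
    person = PEOPLE.get(request["person_id"])
    if person is None:
        return {"status": "not-found"}
    return {"status": "ok", "name": person["name"], "role": person["role"]}
```

Because every call is independent, any two invocations with the same request produce the same response, which is what lets the SOA infrastructure route calls to any available instance of the service.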
Types of services
There are several types of services used in SOA systems.
Business services
Entity services
Functional services
Utility services
Business services
A business service can be defined as the logical encapsulation of business functions. It has to
be relevant to the business the organization is running. An easy way to determine whether a
service is a business service is to ask whether it could be created without consulting business
managers; if it could, it probably isn't a business service.
One feature of a business service is that it should have as few dependencies as
possible, so that it can be reused easily throughout the organization. This reusability brings
consistency: any change in business policy can be propagated throughout the
organization much more easily.
The concept of reusability in SOA refers to reusable high-level business services rather than
reusable low-level components. There is no easy way to identify appropriate business services
in an SOA; it takes both the IT and business departments to do that. Business services are not
the only services in an SOA: a typical service model might also include entity services,
task/functional services and utility/process services.
Entity services
An entity service usually represents a business entity (e.g. Employee, Customer, Product,
Invoice, etc.). Such entity services usually expose CRUD (create, read, update, delete)
operations.
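A minimal sketch of such a service for a hypothetical Customer entity, with an in-memory store standing in for whatever persistence the real service would use:

```python
class CustomerEntityService:
    """Hypothetical entity service exposing CRUD operations on Customer."""

    def __init__(self) -> None:
        self._customers = {}
        self._next_id = 1

    def create(self, data: dict) -> int:
        cid = self._next_id
        self._next_id += 1
        self._customers[cid] = dict(data)
        return cid

    def read(self, cid: int) -> dict:
        return dict(self._customers[cid])

    def update(self, cid: int, data: dict) -> None:
        self._customers[cid].update(data)

    def delete(self, cid: int) -> None:
        del self._customers[cid]
```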
Functional services
Functional services do not represent business-related tasks or functions; rather, a functional
service usually corresponds to what can be represented in a sequence diagram. In other words,
it is usually a technology-oriented service, not a business-oriented one. Task services can be
thought of as controllers of a composition of services, and hence their reusability is usually
lower.
Utility services
Utility services offer common and reusable services that are usually not business-centric. They
might include event logging, notifications, exception handling, etc.
SERVICE COMPOSITION
A key concept in SOA is service composition: putting together several different services to
create a new service. Services in an SOA environment can be thought of as building blocks.
Service composition can only be achieved if services have a narrowly defined scope, i.e. they do
just 'one thing'. This is related to the idea of reusability.
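Composition can be sketched as follows: three narrowly scoped constituent services (entity-style, functional-style, utility-style) are assembled into a composite checkout service. The service names, the discount rate and the order data are all illustrative assumptions.

```python
def fetch_order(order_id):
    """Entity-style constituent service: look up an order (stand-in data)."""
    return {"id": order_id, "amount": 250.0}

def apply_discount(order, percent):
    """Functional-style constituent service: do one computation."""
    order = dict(order)
    order["amount"] = round(order["amount"] * (1 - percent / 100), 2)
    return order

def log_event(message, log):
    """Utility-style constituent service: not business-centric."""
    log.append(message)

def checkout(order_id, log):
    """Composite service assembled from the building blocks above."""
    order = fetch_order(order_id)
    order = apply_discount(order, 10)
    log_event(f"checked out order {order_id}", log)
    return order
```

Because each constituent does just one thing, replacing the discount rule or the logging mechanism updates every composite that uses it.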
WEB SERVICES
A web service is a realization of SOA. It is important to note that SOA is an architectural
model that is independent of any technology platform, and Web Services are the most popular
SOA implementation. As the name says, web services offer services over the web. This section
describes web services and introduces various web-service terminologies.
- Hypertext transfer protocol [HTTP]
HTTP is a widely accepted standard that is implemented in many systems and
operating systems, so it is able to address the issue of interoperability. By building
web services on HTTP, all the computers that are able to connect to the Internet
can become potential consumers of web services. By using the HTTPS protocol,
web service communication can be secured.
- Extensible Markup Language [XML]
XML was chosen as it is a platform-independent language that can be
understood by different systems.
- Web Services Description Language [WSDL]
In programming languages, to call a function you need to know its method
signature; WSDL is analogous to such method signatures. A WSDL document
is written in XML, so any web service consumer can understand it and invoke the
service. It also includes where the service is located, the functions/methods
available for invocation, the parameters and their data types, as well as the
format of the response. For example, a WSDL document might define a
ConversionRate method that requires two parameters, FromCurrency and
ToCurrency, of type Currency.
- SOAP
After learning the method signature, the consumer needs to invoke it, which is
done with SOAP messages. SOAP stands for Simple Object Access Protocol. A
SOAP message, again written in XML, is sent to the web service over HTTP for
consumption. The web service consumer, or client, constructs the correct SOAP
messages based on the WSDL document; the response is also in SOAP format.
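A sketch of the SOAP request a consumer might construct for the hypothetical ConversionRate operation mentioned above; the service namespace is an assumption, and in practice the resulting envelope would be POSTed to the service endpoint over HTTP.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/currency"  # hypothetical service namespace

def build_conversion_request(from_currency: str, to_currency: str) -> bytes:
    """Build the SOAP envelope for a ConversionRate call as XML bytes."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{SVC_NS}}}ConversionRate")
    ET.SubElement(call, f"{{{SVC_NS}}}FromCurrency").text = from_currency
    ET.SubElement(call, f"{{{SVC_NS}}}ToCurrency").text = to_currency
    return ET.tostring(envelope)
```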
- W3C [World Wide Web Consortium]
Founded in 1994, its work covers web architecture, document formats, interaction
and more.
- OASIS [Organization for the Advancement of Structured Information Standards]
Drives the development, convergence and adoption of e-business standards; its
work covers service registration and publishing, security, transactions and more.
CLOUD COMPUTING
Cloud computing architecture consists of a front end and a back end. They connect to each
other through a network, usually the Internet. The front end is the side the computer user, or
client, sees. The back end is the “cloud” section of the system. A model of computation and
data storage based on “pay as you go” access to “unlimited” remote data center capabilities.
A cloud infrastructure provides a framework to manage scalable, reliable, on-demand access to
applications.
Cloud services provide the “invisible” backend to many of our mobile applications. High level
of elasticity in consumption.
NIST cloud definition: The National Institute of Standards and Technology (NIST) defines
cloud computing as a "pay per use model for enabling available, convenient and on-demand
network access to a shared pool of configurable computing resources (e.g. networks, servers,
storage, applications and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction".
c. Resource pooling
The provider's computing resources are pooled to serve multiple consumers using a
multi-tenant model, with different physical and virtual resources dynamically assigned and
reassigned according to consumer demand.
d. Rapid Elasticity
Capabilities can be rapidly and elastically provisioned, in some cases automatically, to
quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities
available for provisioning often appear to be unlimited and can be purchased in any quantity at
any time.
e. Measured Service
Cloud systems automatically control and optimize resource usage by leveraging a
metering capability at some level of abstraction appropriate to the type of service. Resource
usage can be monitored, controlled and reported, providing transparency for both the
consumer and the provider of the service.
2. Network
The clouds are in the Internet, and the Internet is connected to many other things that add
value, including social networking sites, commerce APIs, mapping APIs, and other clouds.
3. Innovative
Innovative means that cloud computing and the solutions it provides now are new, modern
and innovative, and it will continue to have a lot of innovative features that provide a lot of
value for the money invested.
4. Expandability
You can add as much capacity as you need, when you need it, just by increasing spending.
There is no need to keep hardware and software waiting in the wings for an opportunity
to go into production.
5. Speed of Implementation
The time to implement cloud computing can vary with circumstances, but you are
not purchasing hardware, installing operating systems, or getting permission to take a portion
of a data center.
The main issues in cloud computing architecture are security issues. The following are
concerns in cloud computing:
Security
Control
Openness
Compliance
Service level agreements
Application
Communications (HTTP,XMPP)
Security (OAuth, OpenID, SSL/TLS)
Client
Browsers (AJAX)
Offline (HTML 5)
Infrastructure
Virtualization (OVF)
Platform
Solution stacks (LAMP, Space-based architecture)
Service
Data (XML, JSON)
Web Services (REST)
TYPES OF CLOUDS
1. PUBLIC CLOUDS
A public cloud is built over the Internet, which can be accessed by any user who has paid
for the service. Public clouds are owned by service providers.
They are accessed by subscription. Many companies have built public clouds, namely
Google App Engine, Amazon AWS, Microsoft Azure, IBM Blue Cloud, and Salesforce Force.com.
These are commercial providers that offer a publicly accessible remote interface for
creating and managing VM instances within their proprietary infrastructure.
A public cloud delivers a selected set of business processes. The application and
infrastructure services are offered on a quite flexible price-per-use basis.
2. PRIVATE CLOUDS
The private cloud is built within the domain of an intranet owned by a single organization.
Therefore, they are client owned and managed. Their access is limited to the owning
clients and their partners.
Their deployment was not meant to sell capacity over the Internet through publicly
accessible interfaces.
Private clouds give local users a flexible and agile private infrastructure to run service
workloads within their administrative domains.
A private cloud is supposed to deliver more efficient and convenient cloud services.
They may affect cloud standardization, while retaining greater customization and
organizational control.
3. HYBRID CLOUDS
A hybrid cloud is built with both public and private clouds.
Private clouds can also support a hybrid cloud model by supplementing local infrastructure
with computing capacity from an external public cloud.
For example, the research compute cloud (RC2) is a private cloud built by IBM.
The RC2 interconnects the computing and IT resources at 8 IBM Research Centers scattered
in the US, Europe, and Asia. A hybrid cloud provides access to clients, the partner network, and third
parties.
Public clouds promote standardization, preserve capital investment, and offer
application flexibility. Private clouds attempt to achieve customization and offer higher
efficiency, resiliency, security, and privacy. Hybrid clouds operate in the middle, with
compromises.
The above figure shows the classes of clouds and their analogy to training services.
The figure below shows public, private, and hybrid clouds over the Internet and intranets. The callout
box shows the architecture of a typical public cloud. A private cloud is built within an intranet. A
hybrid cloud involves both public and private clouds in its range. Users access the clouds from a
web browser or through a special application programming interface (API).
Layered Architecture of Cloud Computing
In the layered architecture of cloud computing, cloud service providers fall into three categories:
o Software as a service
-Software as a service provides a complete web application offered as a service
on demand.
-We can access web applications such as web services, the Google mapping API,
the Flickr API, etc.
o Platform as a service
-Wrapped layers of software are provided as a platform that can be
used to build higher-level services. There are at least two perspectives.
-One is a platform that integrates an OS, middleware, application software, and
even a development environment, which is then provided to a customer as a
service.
-The other is an encapsulated service where applications are developed using a set of
programming languages and tools supported by the provider through an API. The customer
interacts with the platform through the API.
o Infrastructure as a service
-Infrastructure as a service delivers basic storage and standardized compute services over
the network.
-Servers, storage systems, switches, routers, and other systems are pooled and made
available to handle workloads that range from application components to
high-performance computing applications.
-Customers use the resources to deploy and run their applications. This is a low level
of abstraction that allows users to access the underlying infrastructure through the
use of virtual machines.
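The division of responsibility among the three service models can be sketched as follows. The layer split is the commonly cited one (the provider manages more of the stack as you move from IaaS to SaaS); the dictionary layout itself is only an illustration:

```python
# The stack, from what the customer sees down to the metal.
LAYERS = ["application", "runtime", "os", "virtualization", "hardware"]

# Which layers the provider manages under each service model (conventional split).
MANAGED_BY_PROVIDER = {
    "IaaS": {"virtualization", "hardware"},
    "PaaS": {"runtime", "os", "virtualization", "hardware"},
    "SaaS": set(LAYERS),
}

def managed_by_user(model):
    """Layers the customer still manages under a given service model."""
    return [layer for layer in LAYERS if layer not in MANAGED_BY_PROVIDER[model]]

print(managed_by_user("IaaS"))  # ['application', 'runtime', 'os']
print(managed_by_user("SaaS"))  # []
```

Under IaaS the customer still installs and operates the OS and application; under SaaS the provider manages everything, which matches the "complete web application offered as a service" description above.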
The above figure shows the Layers Architecture of Cloud Computing.
The figure below shows that IaaS provides virtualized infrastructure at the user's cost. PaaS is
applied at the platform application level. SaaS provides specific software support for users
at the web service level. DaaS (Data as a Service) applies at the database and distributed file
system level.
ADAPTIVE STRUCTURE
Adaptive architecture is a system that changes its structure, behaviour, or resources according
to demand.
The adaptation made is usually to non-functional characteristics rather than functional ones.
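As a rough sketch of this idea, the following hypothetical worker pool adapts a non-functional characteristic (its capacity) to demand. The thresholds and limits are arbitrary; the point is that the structure changes while the function does not:

```python
# Hedged sketch of an adaptive structure: the pool of workers grows or shrinks
# with demand. Thresholds (0.8, 0.2) and bounds are illustrative only.
class AdaptivePool:
    def __init__(self, min_workers=1, max_workers=8):
        self.min_workers = min_workers
        self.max_workers = max_workers
        self.workers = min_workers

    def adapt(self, load_per_worker):
        """Scale the resource (worker count) up or down according to demand."""
        if load_per_worker > 0.8 and self.workers < self.max_workers:
            self.workers += 1
        elif load_per_worker < 0.2 and self.workers > self.min_workers:
            self.workers -= 1
        return self.workers

pool = AdaptivePool()
print(pool.adapt(0.9))  # 2  (scales up under heavy load)
print(pool.adapt(0.1))  # 1  (scales back down when idle)
```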
Case Study
The original proposal for the Web came from Tim Berners-Lee, a researcher with the European Laboratory for
Particle Physics (CERN), who observed that the several thousand researchers at CERN formed an evolving human
"web." People came and went, developed new research associations, lost old ones, shared papers, chatted in the
hallways, and so on, and Berners-Lee wanted to support this informal web with a similar web of electronic
information.
In 1989, he created and circulated throughout CERN a document entitled Information Management: A Proposal. By
October of 1990 a reformulated version of the project proposal was approved by management, the name World
Wide Web was chosen, and development began.
Figure A1.1 shows the elements of the ABC as they applied to the initial proposal approved by CERN
management.
The system was intended to promote interaction among CERN researchers (the end users) within the constraints
of a heterogeneous computing environment. The customer was CERN management, and the developing
organization was a lone CERN researcher. The business case made by Berners-Lee was that the proposed system
would increase communication among CERN staff. This was a very limited proposal with very limited (and
speculative) objectives. There was no way of knowing whether such a system would, in fact, increase
communication. On the other hand, the investment required by CERN to generate and test the system was also
very limited: one researcher's time for a few months.
The technical environment was familiar to those in the research community, for which the Internet had been a
mainstay since its introduction in the early 1970s.
Hypertext systems had had an even longer history, beginning with the vision of Vannevar Bush in the 1940s. Bush's
vision had been explored throughout the 1960s and 1970s and into the 1980s, with hypertext conferences held
regularly to bring researchers together. However, Bush's vision had not been achieved on a large scale by the
1980s: The uses of hypertext were primarily limited to small-scale documentation systems. That was to change.
CERN management approved Berners-Lee's proposal in October 1990. By November he had developed the first
Web program on the NeXT platform, which meant he clearly had begun working on the implementation before
receiving formal management approval.
This loose coupling between management approval and researcher activity is quite common in research
organizations in which small initial investments are required. By their nature, research organizations tend to
generate projects from the bottom up more often than commercial organizations do, because they are dependent
on the researchers' originality and creativity and allow far more freedom than is typical in a commercial
organization.
The initial implementation of a Web system had many features that are still missing from more recent Web
browsers. For example, it allowed users to create links from within the browser, and it allowed authors and
readers to annotate information. Berners-Lee initially thought that no user would want to write HyperText Markup
Language (HTML) or deal with uniform resource locators (URLs). He was
wrong. Users have been willing to put up with these inconveniences to have the power of publishing on the Web.
The World Wide Web, as conceived and initially implemented at CERN, had several desirable qualities. It was
portable, able to interoperate with other types of computers running the same software, and was scalable and
extensible. The business goals of promoting interaction and allowing heterogeneous computing led to the quality
goals of remote access, interoperability, extensibility, and scalability,
which in turn led to libWWW, the original software library that supported Web-based development and a
distributed client-server architecture.
The realization of these properties in the original software architecture created an infrastructure that effectively
supported the Web's tremendous growth (see Table 1.1). libWWW embodies strict separation of concerns and
therefore works on virtually any hardware and readily accepts new protocols, new data formats, and new
applications. Because it has no centralized control, the Web appears to be able to grow without bounds.
There is no explicit requirement for ease of use in the original requirements, and it was not until the development
of point-and-click browsers that the Web began its tremendous growth. On the other hand, the requirement for
portability and the heterogeneous computing environment led to the introduction of the browser as a separate
element, thereby fostering the development of more sophisticated browsers.
The initial set of requirements for the Web, as established in the original project proposals, were as follows:
Remote access across networks.
Any information had to be accessible from any machine on a CERN network.
Heterogeneity.
The system could not be limited to run on any specific hardware or software platform.
Noncentralization.
In the spirit of a human web and of the Internet, there could not be any single source of data or
services.
This requirement was in anticipation that the Web would grow. The operation of linking to a document, in
particular, had to be decentralized.
Access to existing data.
Existing databases had to be accessible.
Ability for users to add data.
Users should be able to "publish" their own data on the Web, using the same interface used to
read others' data.
Private links.
Links and nodes had to be capable of being privately annotated.
Bells and whistles.
The only form of data display originally planned was display on a 24 x 80 character ASCII terminal.
Graphics were considered optional.
Data analysis.
Users should be able to search across the various databases and look for anomalies, regularities,
irregularities, and so on. Berners-Lee gave, as examples, the ability to look for undocumented software and
organizations with no people.
Live links.
Given that information changes all the time, there should be some way of updating a user's view of it. This
could be by simply retrieving the information every time the link is accessed or (in a more sophisticated fashion) by
notifying a user of a link whenever the information has changed.
In addition to these requirements, there were a number of nonrequirements identified. For example, copyright
enforcement and data security were explicitly mentioned as requirements that the original project would not deal
with. The Web, as initially conceived, was to be a public medium. Also, the original proposal explicitly noted that
users should not have to use any particular markup format.
Other criteria and features that were common in proposals for hypertext systems at the time but that were
missing from the Web proposal are as follows:
Controlling topology
Defining navigational techniques and user interface requirements, including keeping a visual history
Having different types of links to express differing relationships among nodes
Although many of the original requirements formed the essence of what the Web is today, several were not
realized, were only partially realized, or their impact was dramatically underestimated. For example, data analysis,
live links, and private link capabilities are still relatively crude to this day. These requirements have gone largely
unfulfilled. Adaptation and selective postponement of requirements are characteristic of unprecedented systems.
Requirements are often lists of desirable characteristics, and in unprecedented systems the tradeoffs required to
realize these requirements are often unknown until a design exists. In the process of making the tradeoffs, some
requirements become more important and others less so. The effect of one of the requirements turned out to
have been greatly underestimated. Namely, the "bells and whistles" of graphics dominate much of today's Web
traffic. Graphics today carry the bulk of the interest and consume the bulk of the Internet traffic generated by the
Web. And yet Berners-Lee and CERN management did not concern themselves with graphics in the initial proposal,
and the initial Web browser was line oriented. Similarly, the original proposal showed little interest in multimedia
research for supporting sound and video.
Some nonrequirements, as the ABC has been traversed, have also become requirements. Security, for one, has
proven to be a substantial issue, particularly as the Web has become increasingly dominated by commercial traffic.
The security issue is large and complex, given the distributed, decentralized form of the Internet. Security is
difficult to ensure when protected access to private data cannot be guaranteed—the Web opens a window onto
your computer, and some uninvited guests are sure to crawl through. This has become even more relevant in
recent years as e-commerce has begun to drive the structure and direction of the Web and a large number of ad
hoc mechanisms have been created to facilitate it. The most obvious is simple encryption of sensitive data,
typically via SSL
(Secure Sockets Layer), seen in Web browsers as HTTPS (HyperText Transfer Protocol Secure). But this protocol
only decreases the likelihood of others snooping on your private data while it is being transmitted over a public
network.
In addition to its enormous growth, the nature of the Web has changed. Although its beginnings were in the
research community, it is increasingly dominated by commercial traffic (as indicated by Internet hosts whose
names end in ".com"). The percentage of .com sites has leveled out at around 55%, but this is due mainly to the
rise of other domains, such as .net and .biz, rather than to any decline in commercial activity.
The advent of easy, widespread access to the Web has had an interesting side effect. Easy access to graphics in a
distributed, largely uncontrolled fashion has spawned the "cyberporn" industry, which has led to a new
requirement: that content be labeled and access to content be controllable. The result is the platform for Internet
content selection (PICS) specification, an industry-wide set of principles, and vendor implementations of them,
that allows the labeling of content and flexible selection criteria. In this way, content producers are not limited in
what they provide, but content consumers can tailor what they view or what they permit others to view according
to their own
tastes and criteria. For example, a parent can prevent a child from viewing movies other than those suitably rated,
and an employer can prevent an employee from accessing non-business-related sites during business hours.
Architectural Solution
The basic architectural approach used for the Web, first at CERN and later at the World Wide Web Consortium
(W3C), relied on clients and servers and a library (libWWW) that masks all hardware, operating system, and
protocol dependencies. Figure A2 shows how the content producers and consumers interact through their
respective servers and clients. The producer places content that is described in HTML on a server machine. The
server communicates with a client using the HyperText Transfer Protocol (HTTP). The software on both the server
and the client is based on libWWW, so the details of the protocol and the dependencies on the platforms are
masked from it. One of the elements on the client side is a browser that knows how to display HTML so that the
content consumer is presented with an understandable image.
Figure A2. Content producers and consumers interact through clients and servers
We now go into more detail about both the libWWW and the client-server architecture used as the basis for the
original Web and that still largely pervades Web technologies. We will discuss how the architecture of the Web
and Web-based software have changed in response to the e-commerce revolution.
As stated earlier, libWWW is a library of software for creating applications that run on either the client or the
server. It provides the generic functionality that is shared by most applications: the ability to connect with remote
hosts, the ability to understand streams of HTML data, and so forth.
libWWW is a compact, portable library that can be built on to create Web-based applications such as clients,
servers, databases, and Web spiders. It is organized into five layers, as shown in Figure A3.
Figure A3. A layered view of libWWW
The generic utilities provide a portability layer on which the rest of the system rests. This layer includes basic
building blocks for the system such as network management, data types such as container classes, and string
manipulation utilities. Through the services provided by this layer, all higher levels can be made platform
independent, and the task of porting to a new hardware or software platform can be almost entirely contained
within the porting of the utilities layer, which needs to be done only once per platform.
The core layer contains the skeletal functionality of a Web application—network access, data management and
parsing, logging, and the like. By itself, this layer does nothing. Rather, it provides a standard interface for a Web
application to be built upon, with the actual functionality provided by plug-in modules and call-out functions that
are registered by an application. Plug-ins are registered at runtime and do the actual work of the core
layer—sending and manipulating data. They typically support protocols, handle low-level transport, and
understand data formats. Plug-ins can be changed dynamically, making it easy to add new functionality or even to
change the very nature of the Web application.
Call-out functions provide another way for applications to extend the functionality provided in the core layer. They
are arbitrary application-specific functions that can be called before or after requests to protocol modules. What is
the relationship between the generic utilities and the core? The generic utilities provide platform-independent
functions, but they can be used to build any networked application.
The core layer, on the other hand, provides the abstractions specific to building a Web
application.
The stream layer provides the abstraction of a stream of data used by all data transported between the
application and the network.
The access layer provides a set of network-protocol-aware modules. The standard set of protocols that libWWW
originally supported is as follows:
HTTP—the underlying protocol of the World Wide Web
Network News Transport Protocol (NNTP)—the protocol for Usenet messages
Wide Area Information Server (WAIS)—a networked information retrieval system
File Transfer Protocol (FTP), TELNET, rlogin, Gopher, the local file system, and TN3270
Many of these are becoming rare, but others, such as HTTPS (HTTP Secure), have been added.
It is relatively simple to add new protocol modules because they are built upon the abstractions of the lower
layers.
The uppermost layer, consisting of the Web application modules, is not an actual application but rather a set of
functionality useful for writing applications. It includes modules for common functionality, such as caching, logging,
and registering proxy servers (for protocol translation) and gateways (for dealing with security firewalls, for
example); history maintenance, and so on.
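The core-plus-plug-ins organization described above can be sketched as follows. This is not libWWW's actual API; the class, scheme names, and handlers are hypothetical stand-ins for the idea that the core does nothing by itself and protocol modules are registered at runtime:

```python
# Sketch of a do-nothing core extended by plug-in protocol modules.
class Core:
    def __init__(self):
        self._protocols = {}

    def register(self, scheme, handler):
        """Plug in a protocol module at runtime, as libWWW's access layer allows."""
        self._protocols[scheme] = handler

    def request(self, url):
        """Dispatch a request to whichever module handles the URL's scheme."""
        scheme = url.split("://", 1)[0]
        if scheme not in self._protocols:
            raise ValueError(f"no module registered for {scheme!r}")
        return self._protocols[scheme](url)

core = Core()
core.register("http", lambda url: f"GET {url} via HTTP module")
core.register("ftp", lambda url: f"RETR {url} via FTP module")
print(core.request("http://example.org/"))  # GET http://example.org/ via HTTP module
```

Because modules are looked up by scheme at request time, a new protocol (say, HTTPS) can be added, or an existing one replaced, without touching the core, which is exactly the extensibility the layer structure was designed for.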
As a result of building libWWW and the many applications that rest on it, several lessons have been learned. These
lessons derived in part from the developers' experience in trying to meet the requirements that Web-based tools
be heterogeneous, support remote access across networks, be noncentralized, and so forth. However, the
requirement that turned out to be the most challenging was supplying unforeseen bells and whistles. That is,
allowing the features of Web-based applications to grow has driven many decisions in libWWW and has led to the
following lessons:
Formalized application programming interfaces (APIs) are required. These are the interfaces that present the
functionality of libWWW to the programs built on top of it. APIs should be specified in a
language-independent fashion because libWWW is meant to support application development on a wide variety of
platforms and in many languages.
Functionality and the APIs that present it must be layered. Different applications will need access to different
levels of service abstraction, which are most naturally provided by layers.
The library must support a dynamic, open-ended set of features. All of these features must be replaceable, and it
must be possible to make replacements at runtime.
Processes built on the software must be thread safe. Web-based applications must support the ability to perform
several functions simultaneously, particularly because operations such as downloading large files over a slow
communication link may take a considerable amount of real time. This requires the use of several simultaneous
threads of control. Thus, the functionality exposed by the APIs must be safe to use in a threaded environment. It
turns out that libWWW does not support all of these goals as well as it might. For example, the libWWW core
makes some assumptions about essential services, so not all features can be dynamically replaced. Furthermore,
libWWW is meant to run on many different
platforms, and so it cannot depend on a single-thread model. Thus, it has implemented pseudothreads, which
provide some, but not all, of the required functionality. Finally, most current Web applications do not support
dynamic feature configuration; they require a restart
before new services can be registered.
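The thread-safety lesson can be illustrated with a small sketch: several simulated downloads run concurrently, and a lock guards the shared result structure so that concurrent appends cannot corrupt it. The file names and the "download" itself are stand-ins for slow network fetches:

```python
import threading

# Shared state touched by several threads at once; the lock serialises access.
results = []
lock = threading.Lock()

def download(name):
    data = f"contents of {name}"   # stand-in for a slow network fetch
    with lock:                     # thread-safe update of the shared structure
        results.append((name, data))

threads = [threading.Thread(target=download, args=(f"file{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(name for name, _ in results))  # ['file0', 'file1', 'file2', 'file3']
```

An API that mutated shared state without such protection would be unsafe to call from multiple threads, which is the libWWW lesson stated above.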
Figure A4 shows a deployment view of a typical Web client-server that was built using libWWW services. A module
decomposition view is also shown for the HTTP client and server components of the deployment view. The figure
makes a few points about libWWW.
First, not all parts of a client-server are built from it. For example, the user interface is independent. Second, the
names of the managers do not directly correspond to the names of the layers: Although the access manager,
protocol manager, and stream manager are clearly related to the access and stream layers, the cache manager
uses the services of the application layer. The stream managers in the client-server pair manage the low-level
communications, thus ensuring transparent communication across a network for the other parts of the system.
Figure A4. Deployment view of a Web client-server with a module decomposition view of the HTTP client and server components
The user interface (UI) manager handles the look-and-feel of the client's user interface. However, given the
open-ended set of resources that a WWW system can handle, another element, the presentation manager, can
delegate information display to external programs (viewers) to view resources known by the system but that the
UI manager does not directly support. For example, most Web viewers use an external program to view PostScript
or .pdf files. This delegation is a compromise between the competing desires of user interface integration (which
provides for a consistent look-and-feel and hence better usability) and extensibility.
The UI manager captures a user's request for information retrieval in the form of a URL and passes the information
to the access manager. The access manager determines if the requested URL exists in cache and also interprets
history-based navigation (e.g., "back"). If the file is cached, it is retrieved from the cache manager and passed to
the presentation manager for display to either the UI or an
external viewer. If it is not cached, the protocol manager determines the type of request and invokes the
appropriate protocol suite to service it. The client stream manager uses this protocol for communicating the
request to the server. Once it receives a response from the server in the form of a document, this information is
passed to the presentation manager for appropriate display. The presentation manager consults a static view
control configuration file (mimerc, mailcap, etc.) to help it map document types to external viewers.
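The cache-then-network flow just described can be sketched as follows. The manager names mirror the description above, but the code is an illustration, not libWWW:

```python
# In-memory cache standing in for the cache manager's store.
cache = {}

def fetch_from_network(url):
    """Stand-in for the protocol and stream managers servicing a request."""
    return f"document at {url}"

def access_manager(url):
    """Serve from cache if possible; otherwise go to the network and cache the result."""
    if url in cache:
        return cache[url], "cache"
    doc = fetch_from_network(url)
    cache[url] = doc
    return doc, "network"

doc, source = access_manager("http://cern.ch/paper")
print(source)  # network  (first request misses the cache)
doc, source = access_manager("http://cern.ch/paper")
print(source)  # cache    (repeat request is served locally)
```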
The HTTP server ensures transparent access to the file system—the source of the documents that the Web exists
to transfer. It does this either by handling the access directly (for known resource types) or through a proxy known
as common gateway interface (CGI). CGI handles resource types that a native server cannot handle and handles
extension of server functionality, as will be discussed next. Before
these extensions, the available WWW servers implemented a subset of defined HTTP requests, which allowed for
the retrieval of documents, the retrieval of document meta-information, and server-side program execution via
CGI.
When a request is received by the server stream manager, its type is determined and the path of the URL is
resolved via the path resolver. The HTTP server consults an access list to determine if the requesting client is
authorized for access. It might initiate a password authentication session with the client to permit access to
secured data. Assuming authentication, it accesses the file system (which is
outside the server boundary) and writes the requested information to the output stream. If a program is to be
executed, a process is made available (either new or polled) through CGI and the program is executed, with the
output written by the server stream manager back to the client.
In either case, CGI is one of the primary means by which servers provide extensibility, which is one of the most
important requirements driving the evolution of Web software. CGI became such an important aspect of
Web-based applications that we now discuss this topic at greater length.
Most information returned by a server is static, changing only when modified on its home file system. CGI scripts,
on the other hand, allow dynamic, request-specific information to be returned. CGI has historically been used to
augment server functionality: for input of information, for searches, for clickable images. The most common use of
CGI, however, is to create virtual documents—documents that are dynamically synthesized in response to a user
request. For example, when a user looks for something on the Internet, the search engine creates a reply to the
user's search request; a CGI script creates a new HTML document from the reply and returns it to the user.
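A CGI-style script of the kind described might look like the following sketch, which synthesises a "virtual document" from a query parameter. Real CGI scripts read QUERY_STRING from the environment set by the server and print headers and body to standard output; the query handling here is deliberately simplified:

```python
import os

def cgi_response(query_string):
    """Build a CGI reply: headers, a blank line, then a synthesised HTML document."""
    params = dict(p.split("=", 1) for p in query_string.split("&") if "=" in p)
    term = params.get("q", "")
    body = f"<html><body><h1>Results for {term}</h1></body></html>"
    return "Content-Type: text/html\r\n\r\n" + body

# A server would set QUERY_STRING before invoking the script.
os.environ["QUERY_STRING"] = "q=hypertext"
print(cgi_response(os.environ["QUERY_STRING"]))
```

The document returned never existed on the file system; it is created per request, which is what distinguishes a virtual document from the static pages a server normally returns.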
CGI scripts show the flexibility of the early architectures that were based on libWWW. In the figure above, CGI is
shown as external to the HTTP server. CGI scripts are written in a variety of languages, some of which are compiled
(C, C++, Fortran) and some of which are interpreted (perl, VisualBasic, AppleScript, etc.). These scripts allow a
developer to extend a server's functionality arbitrarily and, in particular, to produce information that the server
will return to the user.
However, because scripts may contain any functionality written in C, perl, and so on, they represent an enormous
security hole for the system on which they are installed. For example, a script (which runs as a process separate
from the server) might be "tricked" into executing an arbitrary command on the host system on behalf of a remote
user. For this reason, server-side scripts such as CGI have led to a new requirement for increased security. The use
of HTTPS to address this requirement will be described in the next section.
Probably the most important additional feature that CGI brought to the Web architecture is that it allows users to
"put" information into the Web, in contrast to the "get" operation that servers normally provide. Although the
requirement to put in information was listed in the original World Wide Web project requirements, it has not been
fully achieved. CGI allows users to put information only in application-specific ways, such as adding it to a database
by filling out a form. Although CGI solved many problems inherent in the original design of libWWW—principally
because it provided much-needed server extensibility to handle arbitrary resources and allowed users to put data
in limited ways—it also had several substantial shortcomings.
The security issue was one; another was portability. CGI scripts written in VisualBasic, AppleScript, and C Shell
work on Windows, Macintosh, and UNIX, respectively. These scripts cannot be (easily) moved from one platform to
another.
Table 1 describes how the Web achieved its initial quality goals of remote access, interoperability, extensibility,
and scalability.
The incredible success of the Web has resulted in unprecedented interest from business and hence unprecedented
pressure on the architecture, via the ABC. Business requirements have begun to dominate Web architecture.
Business-to-business and business-to-consumer Web sites have fueled most of the innovation in Web-based
software.
The original conception of the Web was as a web of documents, in keeping with its hypertext roots. E-commerce,
however, views the Web as a web of data, and these different views have led to some tensions. For example,
"pushing" data to a user is difficult; the most common technique for updating data is to reload it at specified
periods rather than to rely on the change of data to force a screen update. Another is the back button on a
browser, which in certain circumstances may result in stale data being displayed on a screen.
The new requirements of e-commerce are stringent and quite different from the original requirements presented
in previous Section :
High performance.
A popular Web site will typically have tens of millions of "hits" per day, and users expect low latency from it.
Customers will not tolerate the site simply refusing their requests.
High availability.
E-commerce sites are expected to be available "24/7." They never close, so they must have minimal
downtime—perhaps a few minutes per year.
Scalability.
As Web sites grow in popularity, their processing capacity must be able to similarly grow, to both expand the
amount of data they can manage and maintain acceptable levels of customer service.
Security.
Users must be assured that any sensitive information they send across the Web is secure from snooping.
Operators of Web sites must be assured that their system is secure from attack (stealing or modifying data,
rendering data unusable by flooding it with requests, crashing it, etc.).
Modifiability.
E-commerce Web sites change frequently, in many cases daily, and so their content must be very simple to
change.
The architectural solution to these requirements is more about system architecture than simply software
architecture. The components that populate the system come from the commercial marketplace: Web servers and
Web clients of course, but also databases, security servers, application servers, proxy servers, transaction servers,
and so forth.
A typical reference architecture for a modern e-commerce system is shown in Figure A5. The browser/user
interaction function is usually fulfilled by a Web browser (but it could be a kiosk, a legacy system with a Web
connection, or some other Web-enabled device). The business rules and applications function is typically fulfilled
by application servers and transaction servers. The data services layer is typically fulfilled by a modern database,
although connections to legacy systems and legacy databases are also quite common. This scheme is often
referred to as an n-tier architecture (here, n = 3). A tier is a partitioning of functionality that may be allocated to a
separate physical machine.
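The three-tier flow of Figure A5 can be sketched with one function per tier: browser/user interaction, business rules, and data services. The catalog contents and formatting are invented for illustration:

```python
def data_tier(product_id):
    """Data services tier: a stand-in for the database."""
    catalog = {42: {"name": "widget", "price": 9.99}}
    return catalog.get(product_id)

def business_tier(product_id):
    """Business rules tier: a stand-in for the application server."""
    item = data_tier(product_id)
    if item is None:
        return "product unavailable"
    return f"{item['name']}: ${item['price']:.2f}"

def browser_tier(product_id):
    """Browser/user interaction tier: renders the result for the user."""
    return f"<p>{business_tier(product_id)}</p>"

print(browser_tier(42))  # <p>widget: $9.99</p>
```

Each function calls only the tier below it, which is the property that lets each tier be allocated to its own physical machine and scaled independently.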
Figure A5. An e-commerce reference architecture
A typical implementation of an e-commerce system architecture consists of a number of tiers, each consisting of a
coherent grouping of software (typically customized commercial components) and hardware. Such a configuration
is given in Figure A6, which shows how software is allocated to hardware.
The figure is annotated with the functional elements from Figure A5 to reinforce the notion that a single function
in the reference architecture may map to multiple tiers in a typical e-commerce architecture. The two parts of
Figure A4 occur here as elementary components: the Web browsers (clients) and the Web servers, respectively,
reflecting the evolution toward component-based systems in which the internal component structure is less
relevant.
We will now discuss each of the elements in Figure A6, along with the qualities that each helps to achieve.
WEB BROWSERS FOR MODIFIABILITY
An end user typically initiates a request for information by interacting with a Web browser. Modern Web browsers
support user interface modifiability in a wide variety of ways, the most obvious of which has not changed since the
inception of the Web: the user interface that the browser presents is not hardwired but is specified via HTML. At
least, it used to be. Nowadays there are many other technologies for creating sophisticated user interfaces. XML,
Flash, ActiveX, and Java applets are just a few of the means by which the standard palette of Web interactors
(graphics and hot spots) is widened to provide fully programmable interactive interfaces via browsers.
Once the user has submitted a request, it must be transmitted to a target Web site. This transmission may be via
HTTP or, for sensitive information such as credit card or identification numbers, HTTPS (HTTP Secure). HTTPS uses
Netscape's Secure Sockets Layer (SSL) as a subprotocol underneath HTTP. It uses a different port (443 instead of
the standard port 80 that HTTP uses) to request TCP/IP services in an encrypted form. SSL uses public-key
cryptography to negotiate a 128-bit symmetric session key that encrypts the data, and this level of encryption is
considered adequate for the exchange of small amounts of commercial information in short transactions.
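The port convention described above can be sketched as a small helper, shown here in Python. This is only an illustration of the HTTP/HTTPS default-port rule; the host names are hypothetical.

```python
from urllib.parse import urlsplit

# Default ports as stated in the text: HTTP uses 80, HTTPS uses 443.
DEFAULT_PORTS = {"http": 80, "https": 443}

def effective_port(url: str) -> int:
    """Return the TCP port a request to `url` would use,
    falling back to the scheme's default when none is given."""
    parts = urlsplit(url)
    return parts.port if parts.port is not None else DEFAULT_PORTS[parts.scheme]

print(effective_port("https://shop.example.com/checkout"))  # 443
print(effective_port("http://shop.example.com/"))           # 80
```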
PROXY SERVERS FOR PERFORMANCE
Requests from individual browsers may first arrive at a proxy server, which exists to improve the performance of
the Web-based system. These servers cache frequently accessed Web pages so that users may retrieve them
without having to access the Web site. (Caches carry out the tactic of "multiple copies.") They are typically located
close to the users, often on the same network, so they save a tremendous amount of both communication and
computation resources. Proxy servers are also used by companies that want to restrict their employees' access to
certain Web sites. In this case the proxy server is acting somewhat like a firewall.
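The caching behavior that makes a proxy server a performance tactic can be sketched as follows. This is a minimal illustration of the "multiple copies" tactic, assuming a simple time-to-live policy and a stand-in `origin` function; a real proxy also honors HTTP cache-control headers.

```python
import time

class ProxyCache:
    """Minimal page cache: repeated requests for the same URL are served
    locally instead of traveling to the origin Web site."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}          # url -> (timestamp, body)
        self.hits = self.misses = 0

    def fetch(self, url, origin_fetch):
        entry = self._store.get(url)
        if entry and time.time() - entry[0] < self.ttl:
            self.hits += 1
            return entry[1]       # served from cache: no origin traffic
        self.misses += 1
        body = origin_fetch(url)  # only cold or expired requests reach the origin
        self._store[url] = (time.time(), body)
        return body

cache = ProxyCache()
calls = []                        # records how often the origin is contacted
def origin(url):
    calls.append(url)
    return f"<html>{url}</html>"

cache.fetch("http://example.com/", origin)
cache.fetch("http://example.com/", origin)
print(cache.hits, cache.misses, len(calls))  # 1 1 1
```

The second fetch never reaches the origin, which is exactly the communication and computation saving the text describes.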
Requests from the browser (or proxy server) then arrive at a router, located on the e-commerce provider's
network, that may include a firewall for security. (Alternatively, the router may pass HTTP requests on to a separate
firewall.) The router may implement network address translation (NAT), which translates an externally visible IP
address into an internal IP address. The IP address for any return traffic from the Web server is translated so that it
appears to have originated from the externally visible site, not from the internal IP address. NAT is one of the
techniques used in load balancing, as we will discuss shortly.
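Why NAT doubles as a load-balancing mechanism might be sketched like this. The IP addresses are hypothetical, the server choice is simple round-robin, and a real NAT device also rewrites ports and checksums.

```python
class Nat:
    """Sketch of destination NAT: one externally visible IP address is
    translated to one of several internal Web servers, and return
    traffic is rewritten to appear to come from the external address."""
    def __init__(self, external_ip, internal_ips):
        self.external_ip = external_ip
        self.internal_ips = internal_ips
        self._rr = 0  # round-robin index over internal servers

    def translate_in(self, dst_ip):
        # Incoming packet addressed to the public IP is rewritten to an
        # internal server address, chosen round-robin (the load-balancing use).
        assert dst_ip == self.external_ip, "packet not addressed to this site"
        ip = self.internal_ips[self._rr % len(self.internal_ips)]
        self._rr += 1
        return ip

    def translate_out(self, src_ip):
        # Return traffic appears to originate from the externally visible
        # site, never from an internal IP address.
        return self.external_ip if src_ip in self.internal_ips else src_ip

nat = Nat("203.0.113.10", ["10.0.0.1", "10.0.0.2"])
print(nat.translate_in("203.0.113.10"))  # 10.0.0.1
print(nat.translate_out("10.0.0.1"))     # 203.0.113.10
```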
The purpose of the firewall is to prevent unauthorized information flows or accesses from the outside world, an
example of the "limit access" tactic. There are several types of firewall, the most common being packet filters and
application proxies. Packet filters examine the TCP and IP headers of each incoming packet and, if any bad
behavior is detected (such as an attempt to connect via an unauthorized port or to send nonconforming file types),
the packet is rejected. Packet filter firewalls are appropriate for Web-based communication because they examine
each packet in isolation; there is no attempt to maintain a history of previous communication.
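The stateless, per-packet decision of a packet filter could be sketched as below. The dict-based packet is a stand-in for parsed TCP/IP header fields, and the allowed-port rule is an assumed policy, not a real firewall configuration.

```python
def allow_packet(pkt, allowed_ports=frozenset({80, 443})):
    """Stateless packet filter: judge a single packet purely from its
    header fields, with no memory of previous packets."""
    return pkt["protocol"] == "tcp" and pkt["dst_port"] in allowed_ports

# HTTP/HTTPS traffic passes; a connection attempt on an unauthorized
# port (here 23, telnet) is rejected.
print(allow_packet({"protocol": "tcp", "dst_port": 443}))  # True
print(allow_packet({"protocol": "tcp", "dst_port": 23}))   # False
```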
Application proxy firewalls are, as their name suggests, application specific. They typically understand application
protocols and hence can filter traffic based on known patterns of behavior. An application proxy may, for example,
refuse an HTTP response unless an HTTP request was recently sent to that site. These firewalls can be much slower
than packet filter firewalls because they rely on keeping a
certain amount of history information on hand and their processing tends to be more complex.
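The stateful behavior described above, where a response is admitted only if a matching request was recently sent, might be sketched as follows. This is a simplified model rather than a real application proxy; the time window is an assumed parameter, and host names are hypothetical.

```python
class HttpApplicationProxy:
    """Sketch of an application proxy firewall: unlike a packet filter,
    it keeps history, so a response is allowed only if a request to
    that host was seen within a recent time window."""
    def __init__(self, window_seconds=30):
        self.window = window_seconds
        self._pending = {}  # host -> time of last outbound request

    def note_request(self, host, now):
        self._pending[host] = now

    def allow_response(self, host, now):
        t = self._pending.get(host)
        return t is not None and now - t <= self.window

proxy = HttpApplicationProxy(window_seconds=30)
proxy.note_request("shop.example.com", now=100.0)
print(proxy.allow_response("shop.example.com", now=110.0))   # True
print(proxy.allow_response("other.example.com", now=110.0))  # False
```

The per-response bookkeeping here is exactly the "history information on hand" that makes application proxies slower than stateless packet filters.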
The emergence of the Web as a distributed development environment has given rise to several new organizations
and products. For example, UDDI (Universal Description, Discovery, and Integration) provides distributed
Web-based registries of Web services. These services can be used as building blocks for distributed Web-based
applications. Figure A8 shows the ABC for the Web today.
Figure A8. The current ABC for the Web
The customers are the software server and browser providers and the service and content providers. The end
users are the people of the world. The architect's role is provided by the W3C and other consortia such as UDDI,
the Apache project, and several influential companies—Sun, Microsoft, and AOL/Netscape. The remainder of the
ABC is the same except that the technical environment now
includes the Web itself, which adds an upward compatibility requirement to the qualities.
We have already discussed the return cycle of the ABC.
Summary
Documenting an architecture starts with the good practices needed to make it understandable to all
the members involved in system development. We also discussed how views can be
defined using UML. Visual languages and architecture description languages matter greatly
for the quality of the system's documentation. The architecture for a system depends on its
requirements, and the same holds for the documentation of an architecture. Documenting an
architecture calls for different kinds of description, which is where these description
languages are needed: the documentation should serve as an abstraction yet be detailed enough
to serve as a blueprint. The architectural documentation for security analysis may differ from the
architectural documentation for the developer team. Finally, we also discussed some
special topics, such as SOA and Web services, reflecting the evolving nature of architectural
requirements.
Exercises
1. Have you ever documented any system that you developed as part of your project
assignments?
2. Assume that you have been part of a project that did not document any of the evolving
phases of its requirements. List the possible flaws that may have a great impact on the overall
project development.
3. What documentation would you need to produce for a security analysis?
References
1. Len Bass, Paul Clements, Rick Kazman, Software Architecture in Practice, Second Edition, Addison-Wesley, 2003.
2. Abowd, G., Bass, L., Howard, L., Northrop, L., "Structured Modeling: An O-O Framework and
Development Process for Flight Simulators," CMU/SEI-93-TR-14, Software Engineering Institute,
Carnegie Mellon University, 1993.
3. Abowd, G., Bass, L., Clements, P., Kazman, R., Northrop, L., Zaremski, A., "Recommended Best
Industrial Practice for Software Architecture Evaluation," Technical Report CMU/SEI-96-TR-025, Software
Engineering Institute, Carnegie Mellon University, 1996.
4. David Garlan and Mary Shaw, "An Introduction to Software Architecture," Advances in Software
Engineering and Knowledge Engineering, Volume I, edited by V. Ambriola and G. Tortora, World Scientific
Publishing Company, New Jersey, 1993.
5. Mario R. Barbacci, Robert Ellison, Anthony J. Lattanze, Judith A. Stafford, Charles B. Weinstock,
William G. Wood, Quality Attribute Workshops (QAWs), Third Edition, 2003.