Thursday, June 18, 2009
The term software engineering first appeared at the 1968 NATO Software Engineering Conference, where it was meant to provoke thought about the "software crisis" of the time. Since then, it has continued as a profession and field of study dedicated to creating software that is higher quality, more affordable, more maintainable, and quicker to build. Because the field is still relatively young compared to its sister engineering disciplines, there is still much debate about what software engineering actually is and whether it conforms to the classical definition of engineering. It has grown organically out of the limitations of viewing software as just programming. Software development is a term sometimes preferred by practitioners who view software engineering as too heavy-handed and constrictive for the malleable process of creating software. Although software engineering is a young profession, the field's future looks bright: Money Magazine and Salary.com rated software engineering the best job in America in 2006, and if you rank the number of engineers in the United States by discipline, software engineers top the list.
On any software project of typical size, problems like these are guaranteed to come up. Despite all attempts to prevent it, important details will be overlooked. This is the difference between craft and engineering. Experience can lead us in the right direction. This is craft. Experience will only take us so far into uncharted territory. Then we must take what we started with and make it better through a controlled process of refinement. This is engineering.
In software engineering, we desperately need good design at all levels. In particular, we need good top level design. The better the early design, the easier detailed design will be. Designers should use anything that helps. Structure charts, Booch diagrams, state tables, PDL, etc. -- if it helps, then use it. We must keep in mind, however, that these tools and notations are not a software design. Eventually, we have to create the real software design, and it will be in some programming language. Therefore, we should not be afraid to code our designs as we derive them. We simply must be willing to refine them as necessary.
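The advice above, that we should not be afraid to code our designs as we derive them, can be illustrated with a sketch. This is a hypothetical example, not taken from the original essay: a top-level design captured directly as Python stubs, each to be refined later as details emerge (the module names Parser, Planner, and Reporter are invented for illustration).

```python
# A top-level design expressed directly as code stubs rather than only
# as diagrams. Each stub is deliberately simple; the point is that the
# design lives in the programming language and can be refined in place.

class Parser:
    """Turns raw input text into a list of records."""
    def parse(self, text):
        return [line.strip() for line in text.splitlines() if line.strip()]

class Planner:
    """Decides what to do with each record (to be refined later)."""
    def plan(self, records):
        return [("process", r) for r in records]

class Reporter:
    """Renders the plan for a human reader."""
    def report(self, plan):
        return "\n".join(f"{action}: {item}" for action, item in plan)

def pipeline(text):
    # The design itself: parse -> plan -> report. Each stage is refined
    # as the detailed design develops, without leaving the code.
    records = Parser().parse(text)
    plan = Planner().plan(records)
    return Reporter().report(plan)
```

Structure charts or Booch diagrams could document the same three-stage structure, but only the code version compiles and runs, which is the essay's point.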
One final point: the goal of any engineering design project is the production of some documentation. Obviously, the actual design documents are the most important, but they are not the only ones that must be produced. Someone is eventually expected to use the software. It is also likely that the system will have to be modified and enhanced at a later time. This means that auxiliary documentation is as important for a software project as it is for a hardware project. Ignoring for now users manuals, installation guides, and other documents not directly associated with the design process, there are still two important needs that must be solved with auxiliary design documents.
To summarize:
o Real software runs on computers. It is a sequence of ones and zeros that is stored on some magnetic media. It is not a program listing in C++ (or any other programming language).
o A program listing is a document that represents a software design. Compilers and linkers actually build software designs.
o Real software is incredibly cheap to build, and getting cheaper all the time as computers get faster.
o Real software is incredibly expensive to design. This is true because software is incredibly complex and because practically all the steps of a software project are part of the design process.
o Programming is a design activity -- a good software design process recognizes this and does not hesitate to code when coding makes sense.
o Coding actually makes sense more often than believed. Often the process of rendering the design in code will reveal oversights and the need for additional design effort. The earlier this occurs, the better the design will be.
o Since software is so cheap to build, formal engineering validation methods are not of much use in real world software development. It is easier and cheaper to just build the design and test it than to try to prove it.
o Testing and debugging are design activities -- they are the software equivalent of the design validation and refinement processes of other engineering disciplines. A good software design process recognizes this and does not try to shortchange these steps.
o There are other design activities -- call them top level design, module design, structural design, architectural design, or whatever. A good software design process recognizes this and deliberately includes the steps.
o All design activities interact. A good software design process recognizes this and allows the design to change, sometimes radically, as various design steps reveal the need.
o Many different software design notations are potentially useful -- as auxiliary documentation and as tools to help facilitate the design process. They are not a software design.
o Software development is still more a craft than an engineering discipline. This is primarily because of a lack of rigor in the critical processes of validating and improving a design.
o Ultimately, real advances in software development depend upon advances in programming techniques, which in turn mean advances in programming languages. C++ is such an advance. It has exploded in popularity because it is a mainstream programming language that directly supports better software design.
o C++ is a step in the right direction, but still more advances are needed.
About us
Thursday, May 21, 2009
GITAM UNIVERSITY
Established in 1980, Gandhi Institute of Technology and Management (GITAM) is the dream child of an inspired group of eminent intellectuals and industrialists of Andhra Pradesh, India, led by popular parliamentarian Dr. M V V S Murthi, M.A., B.L., Ph.D., to set up a world-class centre of learning in technology, management and medicine in the picturesque port city of Visakhapatnam.
The vision of GITAM is to become a global leader in professional education. Its mission is to impart futuristic and comprehensive education of global standards with a high sense of discipline and social relevance in a serene and invigorating environment, thereby contributing to the development of India as a knowledge society.
The successful saga of GITAM started with the establishment of its College of Engineering. Responding to expanding needs in the fields of technology, management and medicine, GITAM College of Management Studies, GITAM Institute of Foreign Trade, GITAM Dental College, GITAM College of Science and the College of Pharmacy have been added to the ever-growing list of GITAM institutions. In these twenty-seven successful years GITAM has emerged as one of the most sought-after centres of professional education in the country. All the efforts of the management culminated in the formation of GITAM UNIVERSITY on 14 August 2007. On this auspicious occasion, the institute salutes its founding fathers, its many able administrators and its distinguished alumni across the continents for their yeoman contribution to its phenomenal growth.
The renowned industrialist, conscientious parliamentarian and popular philanthropist Dr. M V V S Murthi has been guiding the destinies of GITAM with missionary zeal, first as its Founder Secretary and now as its President.
adaptable model
A software process model that can be adapted to your organization's specific project needs. The APM is intended as a basis from which a customized software process can be developed.
Adaptable Process Model - Product Description
The intent of RSP&A's Adaptable Process Model (APM) is to provide you with a software process that you can customize and adapt to local needs. The APM includes a detailed process flow implemented as a hypertext document, descriptions of many key software engineering tasks, document templates, and checklists. Acquiring the APM can significantly reduce the time required to develop your company's software process description.
Because the complete APM is provided in hypertext format on the RSP&A Web site, you and your colleagues can review the complete generic process. If you think it has merit for your organization, the complete hypertext version can be acquired for an extremely reasonable price. You can then build a local website for Internet or intranet use, while at the same time making the adaptations necessary to mold the APM to local requirements. In most cases, large portions of the APM can be used as is, but in every case you have the ability to modify terminology and process content to meet your needs and better reflect your local information technology or engineering environment.
generic models
- Waterfall model: process phases are distinct and separate.
- Evolutionary development: process phases are interleaved.
- Formal systems development: a mathematical system model is formally transformed to an implementation.
- Reuse-based development: the system is assembled from existing components.
Incremental
Wednesday, May 13, 2009
Iterative and incremental development is a cyclic software development process developed in response to the weaknesses of the waterfall model. It starts with initial planning and ends with deployment, with cyclic interaction in between.
Iterative and incremental development is an essential part of the Rational Unified Process, the Dynamic Systems Development Method, Extreme Programming and, more generally, the agile software development frameworks.
Overview
Incremental development is a scheduling and staging strategy in which the various parts of the system are developed at different times or rates and integrated as they are completed. It neither implies, requires, nor precludes iterative or waterfall development; both of those are rework strategies. The alternative to incremental development is to develop the entire system and integrate it in a single "big bang".
Iterative development is a rework scheduling strategy in which time is set aside to revise and improve parts of the system. It does not presuppose incremental development, but works very well with it. A typical difference is that the output from an increment is not necessarily subject to further refinement, and its testing or user feedback is not used as input for revising the plans or specifications of the successive increments. By contrast, the output from an iteration is examined for modification, and especially for revising the targets of the successive iterations.
The two terms were merged in practical use in the mid-1990s. The authors of the Unified Process (UP) and the Rational Unified Process (RUP) selected the term "iterative development", and "iterations", to mean any combination of incremental and iterative development. Most people who say "iterative development" mean that they do both incremental and iterative development. Some project teams get into trouble by doing only one and not the other without realizing it.
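The distinction between the two strategies can be sketched as a toy model (the features and quality scores below are invented for illustration): increments add new parts without touching the old ones, while iterations rework the parts that already exist.

```python
# Toy model of the two strategies described above.

def incremental(features):
    """Scheduling/staging: build the system piece by piece, integrating
    each part as it is completed; earlier parts are left untouched."""
    system = []
    for f in features:
        system.append(f)          # a new part is integrated
    return system

def iterative(system, rounds):
    """Rework: set aside time to revise and improve existing parts."""
    for _ in range(rounds):
        system = [quality + 1 for quality in system]  # every part reworked
    return system

# Doing both, as most "iterative" teams mean: add parts incrementally,
# then improve them over successive iterations.
draft = incremental([1, 1, 1])
final = iterative(draft, rounds=2)
```

A team doing only `incremental` never revises what it has built; a team doing only `iterative` polishes forever without integrating new parts. The trouble cases the text mentions are exactly these two.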
The Basic idea
The basic idea behind iterative enhancement is to develop a software system incrementally, allowing the developer to take advantage of what is learned during the development of earlier, incremental, deliverable versions of the system. Learning comes from both the development and the use of the system, where possible. The key steps in the process are to start with a simple implementation of a subset of the software requirements and to iteratively enhance the evolving sequence of versions until the full system is implemented. At each iteration, design modifications are made and new functional capabilities are added.
The procedure itself consists of the initialization step, the iteration step, and the project control list. The initialization step creates a base version of the system. The goal of this initial implementation is to create a product to which the user can react; it should offer a sampling of the key aspects of the problem and provide a solution that is simple enough to understand and implement easily. To guide the iteration process, a project control list is created that records all tasks that need to be performed. It includes such items as new features to be implemented and areas of redesign of the existing solution. The control list is constantly revised as a result of the analysis phase.
Each iteration involves the redesign and implementation of a task from the project control list, and the analysis of the current version of the system. The goal of the design and implementation of any iteration is to be simple, straightforward, and modular, supporting redesign at that stage or as a task added to the project control list. The level of design detail is not dictated by the iterative approach. In a lightweight iterative project the code may be the major source of documentation of the system; in a mission-critical iterative project a formal Software Design Document may be used. The analysis of an iteration is based upon user feedback and the program analysis facilities available. It involves analysis of the structure, modularity, usability, reliability, efficiency, and achievement of goals. The project control list is modified in light of the analysis results.
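The initialization/iteration/control-list procedure described above might be sketched as follows (the task names and the feedback rule are hypothetical):

```python
# Sketch of iterative enhancement: a base version plus a project
# control list that records the tasks driving each iteration.

def initialize():
    """Create a simple base version the user can react to."""
    return {"features": ["core"]}

def iterate(system, control_list):
    """Redesign/implement one task, then analyze the current version."""
    task = control_list.pop(0)
    system["features"].append(task)        # implementation of the task
    feedback = f"user feedback on {task}"  # analysis of this version
    if task == "search":                   # analysis may reveal new tasks,
        control_list.append("search-ranking")  # so the list is revised
    return system, feedback

system = initialize()
control_list = ["search", "reports"]       # record of tasks to perform
while control_list:                        # iterate until the list is empty
    system, feedback = iterate(system, control_list)
```

Note how the control list grows mid-project: the analysis of the "search" iteration adds a redesign task that was not in the original plan, which is exactly the learning loop the text describes.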
Iterative development
Iterative development slices the deliverable business value (system functionality) into iterations. In each iteration a slice of functionality is delivered through cross-discipline work, starting from the model/requirements through to the testing/deployment. The unified process groups iterations into phases: inception, elaboration, construction, and transition.
- Inception identifies project scope, risks, and requirements (functional and non-functional) at a high level but in enough detail that work can be estimated.
- Elaboration delivers a working architecture that mitigates the top risks and fulfills the non-functional requirements.
- Construction incrementally fills in the architecture with production-ready code produced from the analysis, design, implementation, and testing of the functional requirements.
- Transition delivers the system into the production operating environment.
Each phase may be divided into one or more iterations, which are usually time-boxed rather than feature-boxed. Architects and analysts work one iteration ahead of developers and testers to keep their work-product backlog full.
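A time-boxed schedule under the four phases could be represented as simple data (the iteration counts and durations below are made up for illustration; real projects vary widely):

```python
# Phases group time-boxed iterations; each number is an iteration's
# length in weeks. Because iterations are time-boxed, not feature-boxed,
# the schedule's length is fixed up front.

phases = [
    ("inception",    [2]),            # one 2-week iteration
    ("elaboration",  [3, 3]),         # two 3-week iterations
    ("construction", [3, 3, 3, 3]),   # four 3-week iterations
    ("transition",   [2]),            # one 2-week iteration
]

def total_weeks(phases):
    """With time-boxing, the end date is a simple sum; what varies per
    iteration is the feature content, not the duration."""
    return sum(sum(iterations) for _, iterations in phases)
```

With feature-boxing the durations would be unknowns; time-boxing trades scope flexibility for schedule predictability.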
prototyping
Introduction
In many instances the client only has a general view of what is expected from the software product. In such a scenario where there is an absence of detailed information regarding the input to the system, the processing needs and the output requirements, the prototyping model may be employed.
This model reflects an attempt to increase the flexibility of the development process by allowing the client to interact and experiment with a working representation of the product. Development continues only once the client is satisfied with the functioning of the prototype. At that stage the developer determines the specification of the client’s real needs.
Software prototyping, a possible activity during software development, is the creation of prototypes, i.e., incomplete versions of the software program being developed.
A prototype typically implements only a small subset of the features of the eventual program, and the implementation may be completely different from that of the eventual product.
The purpose of a prototype is to allow users of the software to evaluate proposals for the design of the eventual product by actually trying them out, rather than having to interpret and evaluate the design based on descriptions.
Prototyping has several benefits. The software designer and implementer can obtain feedback from the users early in the project. The client and the contractor can check whether the software matches the specification according to which the program is being built. It also gives the software engineer some insight into the accuracy of initial project estimates and into whether the proposed deadlines and milestones can be met. The degree of completeness and the techniques used in prototyping have been in development and debate since the idea's proposal in the early 1970s.
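The prototype-feedback cycle described above can be sketched as a loop. In this sketch the "wishlist" and the review function are stand-ins for real client evaluation; the point is that the client's requirements are discovered by trying prototypes, not stated up front.

```python
# Build incomplete versions, show them to the client, refine until the
# client is satisfied, then the clarified requirements are the spec.

def build_prototype(requirements):
    """An incomplete version implementing only a subset of features."""
    return {"implements": requirements[:2]}    # small subset only

def client_review(prototype, wishlist):
    """Stand-in for real user evaluation: trying the prototype reveals
    what the client actually still needs."""
    return [w for w in wishlist if w not in prototype["implements"]]

wishlist = ["login", "search"]        # what the client really wants
requirements = ["login"]              # the client's initial, vague view
while True:
    prototype = build_prototype(requirements)
    missing = client_review(prototype, wishlist)
    if not missing:                   # client is satisfied; stop refining
        break
    requirements += missing           # needs clarified by experimenting
```

After the loop terminates, `requirements` records the client's real needs, which is the specification the text says the developer determines at that stage.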
This process is in contrast with the 1960s and 1970s monolithic development cycle of building the entire program first and then working out any inconsistencies between design and implementation, which led to higher software costs and poor estimates of time and cost. The monolithic approach has been dubbed the "Slaying the (software) Dragon" technique, since it assumes that the software designer and developer is a single hero who has to slay the entire dragon alone. Prototyping can also avoid the great expense and difficulty of changing a finished software product.
(TSP)
What Is Team Software Process (TSP)?
The Team Software Process (TSP), along with the Personal Software Process, helps the high-performance engineer to
- ensure quality software products
- create secure software products
- improve process management in an organization
Engineering groups use the TSP to apply integrated team concepts to the development of software-intensive systems. A four-day launch process walks teams and their managers through
- establishing goals
- defining team roles
- assessing risks
- producing a team plan
After the launch, the TSP provides a defined process framework for managing, tracking and reporting the team's progress.
Using TSP, an organization can build self-directed teams that plan and track their work, establish goals, and own their processes and plans. These can be pure software teams or integrated product teams of 3 to 20 engineers.
TSP will help your organization establish a mature and disciplined engineering practice that produces secure, reliable software. Find out how you can use TSP to strengthen your security practices.
TSP is also being used as the basis for a new measurement framework for software acquirers and developers. This effort is the Integrated Software Acquisition Metrics (ISAM) Project.
What Is Personal Software Process (PSP)?
The Personal Software Process (PSP) shows engineers how to
- manage the quality of their projects
- make commitments they can meet
- improve estimating and planning
- reduce defects in their products
Because personnel costs constitute 70 percent of the cost of software development, the skills and work habits of engineers largely determine the results of the software development process. Based on practices found in the Capability Maturity Model (CMM), the PSP can be used by engineers as a guide to a disciplined and structured approach to developing software. The PSP is a prerequisite for an organization planning to introduce the TSP.
The PSP can be applied to many parts of the software development process, including
- small-program development
- requirement definition
- document writing
- systems tests
- systems maintenance
- enhancement of large software systems
Tuesday, May 12, 2009
The following are registered trademarks of Carnegie Mellon University.
• Capability Maturity Model®
• CMM®
The following are service marks of Carnegie Mellon University.
• Capability Maturity Model IntegrationSM
• CMMISM
• Personal Software ProcessSM
• PSPSM
• Team Software ProcessSM
• TSPSM
Rules of Engagement
“CMM” means SW-CMM.
“CMMI” means CMMI-SE/SW.
“TSP” means the TSP and its recommended introduction strategy, including the prerequisite PSP training for management, engineers and relevant non-software personnel, except where PSP is explicitly addressed.
RAD
RAD is a linear sequential software development process model that emphasizes an extremely short development cycle using a component-based construction approach. If the requirements are well understood and well defined, and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a very short time period.
RAD model has the following phases:
- Business Modeling: The information flow among business functions is defined by answering questions such as what information drives the business process, what information is generated, who generates it, where the information goes, and who processes it.
- Data Modeling: The information collected from business modeling is refined into a set of data objects (entities) needed to support the business. The attributes (characteristics) of each entity are identified, and the relationships between the entities are defined.
- Process Modeling: The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
- Application Generation: Automated tools are used to facilitate construction of the software, often employing fourth-generation language (4GL) techniques.
- Testing and Turnover: Many of the programming components have already been tested, since RAD emphasizes reuse. This reduces overall testing time, but new components must be tested and all interfaces must be fully exercised.
What are the advantages and disadvantages of RAD?
RAD reduces development time, and the reuse of components helps to speed up development. All functions are modularized, so the system is easy to work with.
For large projects, RAD requires highly skilled engineers on the team. Both the end customer and the developers must be committed to completing the system in a much abbreviated time frame; if that commitment is lacking, RAD will fail. RAD is based on an object-oriented approach, and if the project is difficult to modularize, RAD may not work well.
Cinoy M.R. is a Computing Engineer, specializing in solution/concept selling in Information Technology, Wealth Management, and Stress Management.
Unified Software Development Process
Overview
The Unified Process is not simply a process, but rather an extensible framework which should be customized for specific organizations or projects. The Rational Unified Process is, similarly, a customizable framework. As a result it is often impossible to say whether a refinement of the process was derived from UP or from RUP, and so the names tend to be used interchangeably.
The name Unified Process as opposed to Rational Unified Process is generally used to describe the generic process, including those elements which are common to most refinements. The Unified Process name is also used to avoid potential issues of copyright infringement since Rational Unified Process and RUP are trademarks of IBM. The first book to describe the process was titled The Unified Software Development Process (ISBN 0-201-57169-2) and published in 1999 by Ivar Jacobson, Grady Booch and James Rumbaugh. Since then various authors unaffiliated with Rational Software have published books and articles using the name Unified Process, whereas authors affiliated with Rational Software have favored the name Rational Unified Process.
Refinements and Variations
Refinements of the Unified Process vary from each other in how they categorize the project disciplines or workflows. The Rational Unified Process defines nine disciplines: Business Modeling, Requirements, Analysis and Design, Implementation, Test, Deployment, Configuration and Change Management, Project Management, and Environment. The Enterprise Unified Process extends RUP through the addition of eight "enterprise" disciplines. Agile refinements of UP such as OpenUP/Basic and the Agile Unified Process simplify RUP by reducing the number of disciplines.
Refinements also vary in the emphasis placed on different project artifacts. Agile refinements streamline RUP by simplifying workflows and reducing the number of expected artifacts.
Refinements also vary in their specification of what happens after the Transition phase. In the Rational Unified Process the Transition phase is typically followed by a new Inception phase. In the Enterprise Unified Process the Transition phase is followed by a Production phase.
The Unified Process has countless refinements and variations; organizations utilizing it invariably incorporate their own modifications and extensions. The following is a list of some of the better-known refinements and variations.
- Agile Unified Process (AUP), a lightweight variation developed by Scott W. Ambler
- Basic Unified Process (BUP), a lightweight variation developed by IBM and a precursor to OpenUP
- Enterprise Unified Process (EUP), an extension of the Rational Unified Process
- Essential Unified Process (EssUP), a lightweight variation developed by Ivar Jacobson
- Open Unified Process (OpenUP), the Eclipse Process Framework software development process
- Rational Unified Process (RUP), the IBM / Rational Software development process
- Oracle Unified Method (OUM), the Oracle development and implementation process
- Rational Unified Process-System Engineering (RUP-SE), a version of RUP tailored by Rational Software for System Engineering
spiral model
Sunday, May 10, 2009
The Spiral Model
The steps in the spiral model can be generalized as follows:
- The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
- A preliminary design is created for the new system.
- A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
- A second prototype is evolved by a fourfold procedure:
- evaluating the first prototype in terms of its strengths, weaknesses, and risks;
- defining the requirements of the second prototype;
- planning and designing the second prototype;
- constructing and testing the second prototype.
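The fourfold procedure for evolving each prototype can be written as a loop. The risk scores and the halving rule below are illustrative only; in practice risk assessment is a judgment made at each turn of the spiral.

```python
# Each spiral turn: evaluate the previous prototype, then define, plan,
# construct, and test the next one, continuing until risk is acceptable.

def evaluate(prototype):
    """Strengths, weaknesses, and risks of the current prototype."""
    return {"risk": prototype["risk"]}

def next_prototype(prototype, assessment):
    """Define requirements for, plan, construct, and test the next
    prototype; here each turn is assumed to halve the remaining risk."""
    return {"version": prototype["version"] + 1,
            "risk": assessment["risk"] // 2}

prototype = {"version": 1, "risk": 8}   # scaled-down first prototype
while prototype["risk"] > 1:            # spiral until risk is acceptable
    assessment = evaluate(prototype)
    prototype = next_prototype(prototype, assessment)
```

The loop structure is the model's defining feature: evaluation of each prototype decides whether another turn of the spiral is warranted.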
Applications
The spiral model is used most often in large projects. For smaller projects, the concept of agile software development is becoming a viable alternative. The US military has adopted the spiral model for its Future Combat Systems program.
Advantages
The spiral model promotes quality assurance through prototyping at each stage in systems development.
waterfall model
The waterfall model is a sequential software development process in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design (validation), Construction, Testing, and Maintenance.
It should be readily apparent that the waterfall development model has its origins in the manufacturing and construction industries; highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development. Ironically, the use of the waterfall model for software development essentially ignores the 'soft' in 'software'.
The first formal description of the waterfall model is often cited to be an article published in 1970 by Winston W. Royce (1929–1995), although Royce did not use the term "waterfall" in this article. Ironically, Royce was presenting this model as an example of a flawed, non-working model (Royce 1970). This is in fact the way the term has generally been used in writing about software development—as a way to criticize a commonly used software practice.
Model
In Royce's original Waterfall model, the following phases are followed in order:
- Requirements specification
- Design
- Construction (AKA implementation or coding)
- Integration
- Testing and debugging (AKA Validation)
- Installation
- Maintenance
To follow the waterfall model, one proceeds from one phase to the next in a purely sequential manner. For example, one first completes the requirements specification, which is then set in stone. When the requirements are fully completed, one proceeds to design. The software in question is designed and a blueprint is drawn up for implementers (coders) to follow; this design should be a plan for implementing the requirements given. When the design is fully completed, an implementation of that design is made by coders. Towards the later stages of this implementation phase, the separate software components produced are combined to introduce new functionality and remove errors.
Thus the waterfall model maintains that one should move to a phase only when its preceding phase is completed and perfected. However, there are various modified waterfall models (including Royce's final model) that may include slight or major variations upon this process.
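The purely sequential gating just described can be sketched as a pipeline in which each phase runs only when its predecessor is complete (the phase-status dictionary is a simplification for illustration):

```python
# Waterfall gating: progress flows strictly downwards; an unfinished
# phase blocks every phase below it.

PHASES = ["requirements", "design", "construction",
          "integration", "testing", "installation", "maintenance"]

def run_waterfall(work):
    """work maps phase -> True once that phase is finished.
    Returns the phases the project actually passes through."""
    completed = []
    for phase in PHASES:
        if not work.get(phase):
            break          # cannot proceed past an unfinished phase
        completed.append(phase)
    return completed

# A project stuck in design never reaches construction, no matter how
# ready the later phases are.
done = run_waterfall({"requirements": True, "design": False})
```

The modified waterfall variants the text mentions would relax the `break`, for instance by allowing feedback from a later phase to reopen an earlier one.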
Supporting arguments
Time spent early on in software production can lead to greater economy later on in the software lifecycle; that is, it has been shown many times that a bug found in the early stages of the production lifecycle (such as requirements specification or design) is cheaper, in terms of money, effort and time, to fix than the same bug found later on in the process. ([McConnell 1996], p. 72, estimates that "a requirements defect that is left undetected until construction or maintenance will cost 50 to 200 times as much to fix as it would have cost to fix at requirements time.") To take an extreme example, if a program design turns out to be impossible to implement, it is easier to fix the design at the design stage than to realize months later, when program components are being integrated, that all the work done so far has to be scrapped because of a broken design.
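McConnell's multiplier makes the arithmetic easy to check. The one-hour base cost below is an assumption for illustration; only the 50x-200x range comes from the quoted estimate.

```python
# Worked example of the cited 50x-200x cost multiplier for a
# requirements defect left undetected until construction or maintenance.

BASE_COST_HOURS = 1                    # assumed cost to fix at requirements time
MULTIPLIER_LOW, MULTIPLIER_HIGH = 50, 200

late_low = BASE_COST_HOURS * MULTIPLIER_LOW    # cheapest late-fix case
late_high = BASE_COST_HOURS * MULTIPLIER_HIGH  # worst late-fix case
```

So a defect that would take an hour to fix at requirements time costs 50 to 200 hours once it survives into construction or maintenance, which is the economy the waterfall argument rests on.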
This is the central idea behind Big Design Up Front (BDUF) and the waterfall model - time spent early on making sure that requirements and design are absolutely correct will save you much time and effort later. Thus, the thinking of those who follow the waterfall process goes, one should make sure that each phase is 100% complete and absolutely correct before proceeding to the next phase of program creation. Program requirements should be set in stone before design is started (otherwise work put into a design based on incorrect requirements is wasted); the program's design should be perfect before people begin work on implementing the design (otherwise they are implementing the wrong design and their work is wasted), etc.
A further argument for the waterfall model is that it places emphasis on documentation (such as requirements documents and design documents) as well as source code. In less thoroughly designed and documented methodologies, much knowledge is lost when team members leave, and the project may find it difficult to recover. If a fully working design document is present (as is the intent of Big Design Up Front and the waterfall model), new team members or even entirely new teams should be able to familiarize themselves with the project by reading the documents.
As well as the above, some prefer the waterfall model for its simple approach and argue that it is more disciplined. Rather than what the waterfall adherent sees as chaos, the waterfall model provides a structured approach; the model itself progresses linearly through discrete, easily understandable and explainable phases and thus is easy to understand; it also provides easily markable milestones in the development process. It is perhaps for this reason that the waterfall model is used as a beginning example of a development model in many software engineering texts and courses.
It is argued that the waterfall model and Big Design Up Front in general can be suited to software projects which are stable (especially those projects with unchanging requirements, such as shrink-wrap software) and where it is possible and likely that designers will be able to fully predict problem areas of the system and produce a correct design before implementation is started. The waterfall model also requires that implementers follow the well-made, complete design accurately, ensuring that the integration of the system proceeds smoothly.
Criticism
The waterfall model is argued by many to be a bad idea in practice, mainly because of their belief that it is impossible, for any non-trivial project, to perfect one phase of a software product's lifecycle before moving on to the next phases and learning from them. For example, clients may not know exactly what requirements they want before they see a working prototype and can comment upon it; they may change their requirements constantly, and program designers and implementers may have little control over this. If clients change their requirements after a design is finished, that design must be modified to accommodate the new requirements, invalidating a good deal of effort if large amounts of time have been invested in Big Design Up Front. Designers may not be aware of future implementation difficulties when writing a design for an unimplemented software product; that is, it may become clear in the implementation phase that a particular area of program functionality is extraordinarily difficult to implement. If this is the case, it is better to revise the design than to persist with a design that was based on faulty predictions and does not account for the newly discovered problem areas.
Dr. Winston W. Royce, in "Managing the Development of Large Software Systems", the first paper that describes the waterfall model, also describes the simplest form as "risky and invites failure".
Steve McConnell, in Code Complete (a book which criticizes the widespread use of the waterfall model), refers to design as a "wicked problem": a problem whose requirements and limitations cannot be entirely known before completion. The implication is that it is impossible to perfect one phase of software development, and therefore, under the waterfall model, impossible to move on to the next phase.
David Parnas, in "A Rational Design Process: How and Why to Fake It", writes:[4]
“Many of the [system's] details only become known to us as we progress in the [system's] implementation. Some of the things that we learn invalidate our design and we must backtrack.”
The idea behind the waterfall model may be "measure twice; cut once", and those opposed to the waterfall model argue that this idea tends to fall apart when the problem being measured is constantly changing due to requirement modifications and new realizations about the problem itself.
Modified models
In response to the perceived problems with the pure waterfall model, many modified waterfall models have been introduced. These models may address some or all of the criticisms of the pure waterfall model. Many different models are covered by Steve McConnell in the "lifecycle planning" chapter of his book Rapid Development: Taming Wild Software Schedules.
All software development models bear some similarity to the waterfall model, since all incorporate at least some phases similar to those used within it; this section, however, deals only with those closest to the waterfall model. For models which differ further from the waterfall model, or which are radically different, seek general information on the software development process.
Sashimi model
The sashimi model (so called because it features overlapping phases, like the overlapping slices of fish in Japanese sashimi) was originated by Peter DeGrace. It is sometimes referred to as the "waterfall model with overlapping phases" or "the waterfall model with feedback". Since phases in the sashimi model overlap, information about problem spots can be acted upon during phases that would typically, in the pure waterfall model, precede others. For example, since the design and implementation phases will overlap in the sashimi model, implementation problems may be discovered during the design and implementation phase of the development process. This helps alleviate many of the problems associated with the Big Design Up Front philosophy of the waterfall model.
FAQ
What is a requirement?
A requirement describes a condition or capability to which a system must conform, either derived directly from user needs or stated in a contract, standard, specification, or other formally imposed document. In systems engineering, a requirement can be a description of what a system must do. In other words, it is a statement identifying a capability, physical characteristic, or quality factor that bounds a product or process need for which a solution will be pursued.
What is requirements engineering?
Requirements engineering is the process of establishing the services that the customer requires from the system and the constraints under which it is to be developed and operated.
What are the requirement engineering processes?
- Feasibility study
- Requirements elicitation and analysis
- Requirements specification
- Requirements validation
- Requirements management
What is requirements management?
A systematic approach to eliciting, organizing and documenting the software requirements of the system, and establishing and maintaining agreement between the customer and the project team on changes to those requirements. Effective requirements management includes maintaining a clear statement of the requirements, along with appropriate attributes and traceability to other requirements and other project artifacts.
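As a minimal sketch of the attributes and traceability described above (the class, field, and artifact names here are illustrative assumptions, not taken from any particular requirements tool), a requirement record with trace links might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One requirement with management attributes and trace links."""
    req_id: str
    text: str
    status: str = "TBD"            # e.g. TBD, TBR, Defined, Approved, Verified
    priority: str = "medium"
    traces_to: list = field(default_factory=list)  # ids of related artifacts

# Record a requirement and trace it to a (hypothetical) use-case artifact.
r1 = Requirement("REQ-001", "The system shall authenticate users.")
r1.traces_to.append("UC-003")
r1.status = "Approved"
print(r1.req_id, r1.status, r1.traces_to)
```

Keeping the trace links as plain artifact ids makes it cheap to answer the impact question the paragraph raises: when a requirement changes, follow its links to find every affected artifact.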
Why is requirements management important?
Requirements analysis is a critical initial step in software development. Managing changing requirements throughout the software development life cycle is the key to developing a successful solution: one that meets users' needs and is developed on time and within budget. A crucial aspect of effectively managing requirements is communicating them to all team members throughout the entire life cycle. In truth, requirements management benefits all project stakeholders (end users, project managers, developers, and testers) by ensuring that they are continually kept apprised of requirement status and understand the impact of changing requirements, specifically on schedules, functionality, and costs.
What are the key requirement management skills?
- Analyze the Problem
- Understand Stakeholder Needs
- Define the System
- Manage the Scope of the System
- Refine the System Definition
- Manage Changing Requirements
What are the artifacts used to manage requirements?
- Vision
- Supplementary specification
- Use case specification
- Glossary
- Stakeholder requests
What is a requirements management plan?
A requirements management plan describes the requirements artifacts, requirement types, and their respective requirements attributes, specifying the information to be collected and the control mechanisms to be used for measuring, reporting, and controlling changes to the product requirements.
What is requirements implementation?
Requirements implementation is the actual work of transforming requirements into software architectural designs, detailed designs, code, and test cases.
What are requirement sources?
Goals: The term goal refers to the overall, high-level objectives of the software. Goals provide the motivation for the software, but are often vaguely formulated.
Domain knowledge: The software engineer needs to acquire, or have available, knowledge about the application domain. This enables them to infer tacit knowledge that the stakeholders do not articulate, assess the trade-offs that will be necessary between conflicting requirements, and, sometimes, to act as a “user” champion.
The operational environment: Requirements will be derived from the environment in which the software will be executed. These may be, for example, timing constraints in real-time software or interoperability constraints in an office environment. These must be actively sought out, because they can greatly affect software feasibility and cost, and restrict design choices.
The organizational environment: Software is often required to support a business process, the selection of which may be conditioned by the structure, culture, and internal politics of the organization. The software engineer needs to be sensitive to these, since, in general, new software should not force unplanned change on the business process.
What are the main types of requirements?
- Functional requirements
- Non-functional requirements
- User requirements
- System requirements
What are the different statuses of a requirement?
- TBD (to be defined) - indicates that the value of the requirement has not been defined.
- TBR (to be reviewed) - indicates that a preliminary value is available but needs further review.
- Defined - indicates that a final value for the requirement has been obtained through analysis and trades.
- Approved - the requirement has been reviewed and approved by the appropriate authorities.
- Verified - the requirement has been verified in accordance with the verification plan.
- Deleted - the requirement is no longer applicable to the program.
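The statuses above form a natural lifecycle. As a sketch (the transition table is an assumption for illustration; real programs define their own rules), they can be modeled as a small state machine that rejects skipped steps:

```python
# Hypothetical transition rules: each status maps to the statuses
# a requirement may move to next.
ALLOWED = {
    "TBD": {"TBR", "Deleted"},
    "TBR": {"Defined", "Deleted"},
    "Defined": {"Approved", "TBR", "Deleted"},
    "Approved": {"Verified", "Deleted"},
    "Verified": {"Deleted"},
    "Deleted": set(),
}

def advance(status, new_status):
    """Move a requirement to a new status, rejecting disallowed jumps."""
    if new_status not in ALLOWED[status]:
        raise ValueError(f"cannot go from {status} to {new_status}")
    return new_status

# Walk one requirement through the full lifecycle.
s = "TBD"
for nxt in ("TBR", "Defined", "Approved", "Verified"):
    s = advance(s, nxt)
print(s)  # Verified
```

Encoding the rules in a table rather than in scattered `if` statements makes it easy to audit which shortcuts (e.g. TBD straight to Approved) are forbidden.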
What is FURPS?
Functionality - includes feature sets, capabilities, and security.
Usability - may include such subcategories as human factors (see Concepts: User-Centered Design), aesthetics, consistency in the user interface, online and context-sensitive help, wizards and agents, user documentation, and training materials.
Reliability - requirements to be considered include frequency and severity of failure, recoverability, predictability, accuracy, and mean time between failures (MTBF).
Performance - a performance requirement imposes conditions on functional requirements; for example, for a given action, it may specify performance parameters for speed, efficiency, availability, accuracy, throughput, response time, recovery time, and resource usage.
Supportability - may include testability, extensibility, adaptability, maintainability, compatibility, configurability, serviceability, installability, and localizability (internationalization).
What are system function requirements?
These requirements specify a condition or capability that must be met or possessed by a system or its component(s). System functional requirements include functional and non-functional requirements. System functional requirements are developed to directly or indirectly satisfy user requirements.
What is a non-technical requirement?
Non-technical requirements are agreements, conditions, and/or contractual terms that affect and determine the management activities of a project.
What are functional Requirements?
Functional requirements capture the intended behavior of the system. This behavior may be expressed as services, tasks or functions the system is required to perform.
Functional requirements specify actions that a system must be able to perform, without taking physical constraints into consideration; they thus specify the input and output behavior of a system.
What are non-functional requirements?
Non-functional requirements specify the qualities that the product must possess, such as security, compatibility with existing systems, and performance requirements. In a product manufacturing example, non-functional requirements would be manufacturing requirements: the conditions, processes, materials, and tools required to get the product from the design board to the shipping dock.
What is a user interface requirement?
These are derived from functional and use-case requirements and traced back to whichever they were derived from. They include items such as screen layout, tab flow, mouse and keyboard use, which controls to use for which functions (e.g. radio button, pull-down list), and other "ease of use" issues.
What is an emergent property requirement?
Some requirements represent emergent properties of software—that is, requirements which cannot be addressed by a single component, but which depend for their satisfaction on how all the software components interoperate. Emergent properties are crucially dependent on the system architecture.
What is a navigation requirement?
These are driven and traced from the Use Case, as the Use Case lists the flow of the system, and the Navigation Requirements depict how that flow will take place. They are usually presented in a storyboard format, and should show the screen flow of each use case, and every alternate flow. Additionally, they should state what happens to the data or transaction for each step. They include the various ways to get to all screens, and an application screen map should be one of the artifacts derived in this category of requirements.
What is an implementation requirement?
An implementation requirement specifies the coding or construction of a system, such as standards, implementation languages, and the operating environment.
What are stable and volatile requirements?
Requirements changes occur while the requirements are being elicited, analyzed, and validated, and after the system has gone into service.
Stable requirements are concerned with the essence of a system and its application domain. They change more slowly than volatile requirements.
Volatile requirements are specific to the instantiation of the system in a particular environment and for a particular customer.
What are the different types of volatile requirements?
- Mutable requirements
- Emergent requirements
- Consequential requirements
- Compatibility requirements
What is requirements measurement?
As a practical matter, it is typically useful to have some concept of the "volume" of the requirements for a particular software product. This number is useful in evaluating the "size" of a change in requirements, in estimating the cost of a development or maintenance task, or simply for use as the denominator in other measurements. Functional Size Measurement (FSM) is a technique for evaluating the size of a body of functional requirements.
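One well-known FSM technique is IFPUG function point counting. As a rough sketch, an unadjusted count multiplies the number of each function type by a complexity weight; the weights below are the conventional average-complexity values, and the counts are made up for illustration (low/high weights and the value adjustment factor are omitted):

```python
# Average-complexity weights used in IFPUG function point counting.
WEIGHTS = {
    "EI": 4,    # external inputs
    "EO": 5,    # external outputs
    "EQ": 4,    # external inquiries
    "ILF": 10,  # internal logical files
    "EIF": 7,   # external interface files
}

def unadjusted_fp(counts):
    """Sum each function type's count times its weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Illustrative counts for a small system.
print(unadjusted_fp({"EI": 3, "EO": 2, "EQ": 1, "ILF": 2, "EIF": 1}))
# 3*4 + 2*5 + 1*4 + 2*10 + 1*7 = 53
```

The resulting number is exactly the kind of "volume" figure the paragraph describes: comparing counts before and after a requirements change gives a size for the change itself.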
What are upgradeability requirements?
Upgradeability is our ability to cost-effectively deploy new versions of the product to customers with minimal downtime or disruption. A key feature supporting this goal is automatic download of patches and upgrade of the end-user's machine. Also, we shall use data file formats that include enough meta-data to allow us to reliably transform existing customer data during an upgrade.
What is a program requirement?
These are not requirements imposed on the system or product to be delivered, but on the process to be followed by the contractor. Program requirements should be necessary, concise, attainable, complete, consistent and unambiguous. Program requirements are managed in the same manner as product requirements. Program requirements include: compliance with federal, state or local laws including environmental laws; administrative requirements such as security; customer/contractor relationship requirements such as directives to use government facilities for specific types of work such as test; and specific work directives (such as those included in Statements of Work and Contract Data Requirements Lists). Program requirements may also be imposed on a program by corporate policy or practice.
What is a performance requirement?
These are quantitative requirements of system performance and are individually verifiable. A performance requirement is a user-oriented quality requirement that specifies a required amount of performance.
What is a physical requirement?
A physical requirement specifies a physical characteristic, such as materials, shape, size, or weight, that a system must possess.
What is a quantifiable requirement?
Requirements can be grouped into "quantifiable requirements" and "non-quantifiable requirements." Quantifiable requirements are those whose presence or absence can be verified in a binary manner; non-quantifiable requirements are those that cannot.
What is an iteration plan?
A time-sequenced set of activities and tasks, with assigned resources and task dependencies, for the iteration.