
Kerala University S8 CS Solved Questions of Software Engineering

Kerala University S8 CS Solved Questions of Software Engineering & Project Management, April 2012





1. How is cyclomatic complexity calculated?

There are three different ways to compute the cyclomatic complexity, and all three are guaranteed to give the same result.

Method 1:

Given a control flow graph G of a program, the cyclomatic complexity V(G) can be computed as:

V(G) = E – N + 2

where N is the number of nodes of the control flow graph and E is the number of edges in the control flow graph.

Method 2:

An alternative way of computing the cyclomatic complexity of a program from an inspection of its control flow graph is as follows:

V(G) = Total number of bounded areas + 1

In the program's control flow graph G, any region enclosed by nodes and edges is called a bounded area. This is an easy way to determine McCabe's cyclomatic complexity.

Method 3:

The cyclomatic complexity of a program can also be computed easily by counting the number of decision statements in the program. If D is the number of decision statements, then McCabe's metric is equal to D + 1.
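As a sketch, the first and third methods can be checked against each other on a small hypothetical control flow graph; the edge list and node numbering below are invented for illustration:

```python
# A hypothetical control flow graph for a function with two if-statements,
# given as an edge list between numbered nodes.
edges = [(1, 2), (1, 3), (2, 4), (3, 4),   # first decision and its join
         (4, 5), (4, 6), (5, 7), (6, 7)]   # second decision and its join

nodes = {n for edge in edges for n in edge}

# Method 1: V(G) = E - N + 2
v_method1 = len(edges) - len(nodes) + 2

# Method 3: number of decision nodes (more than one outgoing edge) + 1
out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
decisions = sum(1 for d in out_degree.values() if d > 1)
v_method3 = decisions + 1

print(v_method1, v_method3)  # both give 3
```

Here E = 8 and N = 7, so Method 1 gives 8 - 7 + 2 = 3, and the two decision nodes (1 and 4) give 2 + 1 = 3 by Method 3, as expected.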

2. What is an object point? Explain how it is useful in estimating project effort.

Object points are a newer software size metric that emerged to cope with recent developments in software and to overcome the deficiencies of the traditional lines-of-code and function point size metrics. Object points have been used as the basis for several software cost estimation models, with promising improvements in the accuracy of estimates. However, because the object point metric is still young, there is a shortage of object-point-based historical project data on which to base the empirical validation of the new object-point-based cost estimation models. Hence, the relationship between the extensively used function points and the newer object point metric has been conceptualized and used in a forward approach that converts function point data into equivalent object point data. Empirical investigation of 66 function point projects has shown high correlation and significance (88% and 0.33, respectively) between the resulting object point effort estimates and the actual function point effort. Furthermore, the resulting object point data have been used to model the function point / object point relationship in two specialized linear models, dependent on productivity factors and function point type. The resulting models have shown high fitness, with values of 0.95 for both models.

3. What are the types of risks encountered during software development? Explain how these risks are estimated.

Project risks: These concern various forms of budgetary, schedule, personnel, resource, and customer-related problems. An important project risk is schedule slippage.

Technical risks: These concern potential design, implementation, interfacing, testing, and maintenance problems. They also include ambiguous, incomplete, or changing specifications, technical uncertainty, and technical obsolescence.

Business risks: These include the risk of building an excellent product that no one wants, losing budgetary or personnel commitments, etc.

The objective of risk assessment is to rank the risks in terms of their damage causing potential. For risk assessment, first each risk should be rated in two ways:

  • The likelihood of a risk coming true (denoted as r).
  • The consequence of the problems associated with that risk (denoted as s).

Based on these two factors, the priority of each risk can be computed:

p = r * s

where p is the priority with which the risk must be handled, r is the probability of the risk becoming real, and s is the severity of damage caused if it does. Once all identified risks are prioritized, the most likely and most damaging risks can be handled first, and more comprehensive risk abatement procedures can be designed for them.
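The prioritization rule p = r * s can be sketched in a few lines; the risk names and the r and s values below are hypothetical:

```python
# Hypothetical identified risks with likelihood r (0..1) and severity s (1..10).
risks = [
    ("schedule slippage", 0.6, 8),
    ("key personnel turnover", 0.3, 9),
    ("technical obsolescence", 0.2, 5),
]

# Priority p = r * s; handle the highest-priority risks first.
prioritized = sorted(risks, key=lambda risk: risk[1] * risk[2], reverse=True)

for name, r, s in prioritized:
    print(f"{name}: p = {r * s:.2f}")
```

With these values, schedule slippage (p = 4.80) would be handled before personnel turnover (p = 2.70) and obsolescence (p = 1.00).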

Three main strategies are used to handle identified risks:

Avoid the risk: This may take several forms, such as discussing with the customer to change the requirements to reduce the scope of the work, or giving incentives to the engineers to avoid the risk of manpower turnover.

Transfer the risk: This strategy involves getting the risky component developed by a third party, buying insurance cover, etc.

Risk reduction: This involves planning ways to contain the damage due to a risk. For example, if there is a risk that some key personnel might leave, new recruitment may be planned.

4. Give the differences between white-box and black-box testing.

In black-box testing, test cases are designed from an examination of the input/output values only; no knowledge of the design or code is required. The following are the two main approaches to designing black-box test cases:

  • Equivalence class partitioning
  • Boundary value analysis

Equivalence Class Partitioning

In this approach, the domain of input values to a program is partitioned into a set of equivalence classes.

Boundary Value Analysis

Programming errors frequently occur at the boundaries of different equivalence classes of inputs, so test cases are chosen at and adjacent to those boundaries.
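A minimal sketch of both black-box approaches, assuming a hypothetical specification that accepts integers in the range 1..5000:

```python
# Hypothetical specification: a program accepts an integer in the range 1..5000.
# Equivalence classes: below the range, within the range, above the range.
LOW, HIGH = 1, 5000

def classify(x):
    if x < LOW:
        return "invalid-low"
    if x > HIGH:
        return "invalid-high"
    return "valid"

# One representative value per equivalence class ...
equivalence_tests = [-5, 2500, 9000]
# ... plus values at and adjacent to each boundary for boundary value analysis.
boundary_tests = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

for x in equivalence_tests + boundary_tests:
    print(x, classify(x))
```

Equivalence partitioning keeps the test suite small (one value per class), while boundary value analysis targets the off-by-one errors that cluster at class edges.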

One white-box testing strategy is said to be stronger than another if it detects all the types of errors detected by the other strategy and, in addition, detects some more types of errors. When two testing strategies each detect types of errors that the other does not, they are called complementary.

5. What is a data dictionary? What is its use in system design?

A data dictionary lists all data items appearing in the DFD model of a system. The data items listed include all data flows and the contents of all data stores appearing on the DFDs in the DFD model of a system. A data dictionary lists the purpose of all data items and the definition of all composite data items in terms of their component data items. For example, a data dictionary entry may represent that the data grossPay consists of the components regularPay and overtimePay.

grossPay = regularPay + overtimePay

For the smallest units of data items, the data dictionary lists their name and their type. Composite data items can be defined in terms of primitive data items using the following data definition operators:

+: denotes composition of two data items, e.g. a+b represents data a and b.

[,,]: represents selection, i.e. any one of the data items listed in the brackets can occur. For example, [a,b] represents that either a or b occurs.

(): the contents inside the brackets represent optional data which may or may not appear, e.g. a+(b) represents that either a or a+b occurs.

{}: represents iterative data definition, e.g. {name}5 represents five name data items; {name}* represents zero or more instances of name data.

=: represents equivalence, e.g. a=b+c means that a represents b and c.

/* */: anything appearing within /* */ is considered a comment.
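The operators above can be illustrated with a toy in-memory data dictionary; apart from grossPay, regularPay, and overtimePay from the text, the entry names below are invented for illustration:

```python
# A toy in-memory data dictionary: each composite item maps to its definition
# using the operators above, and each primitive item maps to its type.
data_dictionary = {
    "grossPay":    "regularPay + overtimePay",       # composition (+)
    "payMode":     "[cash, cheque]",                  # selection [,,]
    "employee":    "name + (middleName) + {child}*",  # optional () and iteration {}*
    "regularPay":  "numeric",
    "overtimePay": "numeric",
}

def is_composite(item):
    # Heuristic: an entry whose definition contains an operator is composite;
    # primitive items list only their type.
    return any(op in data_dictionary[item] for op in "+[]{}()")

print(is_composite("grossPay"))     # True
print(is_composite("regularPay"))   # False
```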

6. Give an example of a design fault that leads to failure.

A fault is the cause of an error, e.g. a mistake in the design or code. A classic design fault is a design that never considers an input case that can actually occur in operation: the fault stays latent until that input arrives, at which point the system fails.
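One minimal, hypothetical illustration: a design that never specifies what should happen for an empty input list. The fault lies in the design, remains invisible during normal use, and surfaces as a failure only when the unconsidered input occurs:

```python
def average(values):
    # Design fault: the design assumed the input list is never empty,
    # so no empty-input case was specified or handled.
    return sum(values) / len(values)

print(average([10, 20, 30]))  # 20.0 -- works for every case the design considered

# The latent fault becomes a failure only when the unconsidered input
# actually occurs at run time:
# average([])  ->  ZeroDivisionError
```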



7. Define software quality assurance. How is the ISO 9000 standard used in software quality assessment?

The SQA team's objective is to ensure that the product does not deviate far from the original design specifications. If it is discovered that deviation has occurred, the SQA team will notify the development team to prevent future deviations and to correct the previous ones. The SQA team will also perform walkthroughs to analyze the product's quality at any particular stage of development.


Confidence of customers in an organization increases when the organization qualifies for ISO certification. This is especially true in the international market. In fact, many organizations awarding international software development contracts insist that the development organization have ISO 9000 certification. For this reason, it is vital for software organizations involved in software export to obtain ISO 9000 certification.

  • ISO 9000 requires a well-documented software production process to be in place. A well-documented software production process contributes to repeatable and higher quality of the developed software.
  • ISO 9000 makes the development process focused, efficient, and cost-effective.
  • ISO 9000 certification points out the weak points of an organization and recommends remedial action.
  • ISO 9000 sets the basic framework for the development of an optimal process and Total Quality Management (TQM).


8. What is meant by software change?

It is impossible to produce systems of any size which do not need to be changed. Once software is put into use, new requirements emerge and existing requirements change as the business running that software changes. Parts of the software may have to be modified to correct errors that are found in operation, improve its performance or other non-functional characteristics. All of this means that, after delivery, software systems always evolve in response to demands for change. Software change is very important because organisations are now completely dependent on their software systems and have invested millions of dollars in these systems.

There are a number of different strategies for software change (Warren, 1998):

  1. Software maintenance Changes to the software are made in response to changed requirements but the fundamental structure of the software remains stable. This is the most common approach used to system change.
  2. Architectural transformation This is a more radical approach to software change than maintenance, as it involves making significant changes to the architecture of the software system. Most commonly, systems evolve from a centralised, data-centric architecture to a client-server architecture.
  3. Software re-engineering This is different from other strategies in that no new functionality is added to the system. Rather, the system is modified to make it easier to understand and change. System re-engineering may involve some structural modifications but does not usually involve major architectural change.

9. Bring out the necessity of architectural design in software project management.

Architectural design is of crucial importance in software engineering; it is during this activity that essential requirements such as reliability, cost, and performance are dealt with. The task is cumbersome because the software engineering paradigm is shifting from monolithic, stand-alone, built-from-scratch systems to componentized, platform-based, standards-based ones.

Though the architectural design is the responsibility of the developers, other people such as user representatives, systems engineers, hardware engineers, and operations personnel are also involved. All these stakeholders must be consulted while reviewing the architectural design in order to minimize risk and errors.

10. Why is user interface design important in the software development life cycle?

The user interface is the front-end application view with which the user interacts in order to use the software. The user can manipulate and control the software, as well as the hardware, by means of the UI. The UI is a part of the software and is designed in such a way that it gives the user insight into the software. The UI provides the fundamental platform for human-computer interaction. A UI can be graphical, text-based, or audio/video-based, depending upon the underlying hardware and software, or a combination of both. Software becomes more popular if its UI is:

  • Attractive
  • Simple to use
  • Responsive in short time
  • Clear to understand
  • Consistent on all interfacing screens.

UI is broadly divided in to two categories:

  • Command Line Interface
  • Graphical User Interface

11. i) What is CMM? Explain the different levels and areas of CMMI. (12)

The SEI Capability Maturity Model (SEI CMM) helps organizations improve the quality of the software they develop, so adoption of the SEI CMM model has significant business benefits. SEI CMM can be used in two ways: capability evaluation and software process assessment. The two differ in motivation, objective, and the final use of the result. Capability evaluation provides a way to assess the software process capability of an organization; the results indicate the likely performance of a contractor if the contractor is awarded the work, and can therefore be used to select a contractor. Software process assessment, on the other hand, is used by an organization with the objective of improving its own process capability, so this type of assessment is purely for internal use. SEI CMM classifies software development organizations into the following five maturity levels, designed so that it is easy for an organization to build up its quality system gradually, starting from scratch.

Level 1: Initial. A software development organization at this level is characterized by ad hoc activities. Very few or no processes are defined and followed. Since software production processes are not defined, different engineers follow their own processes and, as a result, development efforts become chaotic. Therefore, this is also called the chaotic level. The success of projects depends on individual efforts and heroics.

Level 2: Repeatable. At this level, the basic project management practices such as tracking cost and schedule are established. Size and cost estimation techniques like function point analysis, COCOMO, etc. are used.

Level 3: Defined. At this level the processes for both management and development activities are defined and documented. There is a common organization-wide understanding of activities, roles, and responsibilities.

Level 4: Managed. At this level, the focus is on software metrics. Two types of metrics are collected. Product metrics measure the characteristics of the product being developed, such as its size, reliability, time complexity, understandability, etc.

Level 5: Optimizing. At this stage, process and product metrics are collected, and the measurement data are analyzed for continuous process improvement. For example, if an analysis of the process measurement results shows that code reviews were not very effective and a large number of errors were detected only during unit testing, then the process may be fine-tuned to make reviews more effective.


    ii) Compare CMM with the ISO 9000 standard.

  • ISO 9000 is awarded by an international standards body. Therefore, ISO 9000 certification can be quoted by an organization in official documents, communication with external parties, and tender quotations. However, SEI CMM assessment is purely for internal use.


  • SEI CMM was developed specifically for software industry and therefore addresses many issues which are specific to software industry alone.


  • SEI CMM goes beyond quality assurance and prepares an organization to ultimately achieve Total Quality Management (TQM). In fact, ISO 9001 aims at level 3 of SEI CMM model.


  • SEI CMM model provides a list of key process areas (KPAs) on which an organization at any maturity level needs to concentrate to take it from one maturity level to the next. Thus, it provides a way for achieving gradual quality improvement.


12. i) Compare and contrast the waterfall and RAD models for software development. (10)

The classical waterfall model can be considered the basic model, and all other life cycle models are embellishments of it. However, the classical waterfall model cannot be used in practical development projects, since it supports no mechanism to handle the errors committed during any of the phases. This problem is overcome in the iterative waterfall model, which is probably the most widely used software development model evolved so far. This model is simple to understand and use. However, it is suitable only for well-understood problems; it is not suitable for very large projects or for projects that are subject to many risks.

The rapid application development (RAD) model emphasizes delivering projects in small pieces. If the project is large, it is divided into a series of smaller projects, each of which is planned and delivered individually. Thus, with a series of smaller projects, the final product is delivered quickly and in a less structured manner. A major characteristic of the RAD model is its focus on the reuse of code, processes, templates, and tools.


ii) Discuss three different techniques used for requirement elicitation of software. (10)

  1. Interviews: These are conventional ways of eliciting requirements, which help software engineers, users, and the software development team to understand the problem and suggest solutions for it. The software engineer interviews the users with a series of questions. When an interview is conducted, rules are established for users and other stakeholders, and an agenda is prepared before conducting the interview.

  2. Scenarios: These are descriptions of a sequence of events, which help to determine possible future outcomes before the software is developed or implemented. Scenarios are used to test whether the software will accomplish the user requirements, and they help to provide a framework for questions from software engineers about the user's tasks.

  3. Questionnaires: A questionnaire is an effective tool for gathering requirements and produces a written document. Its major advantage is that it requires less effort and gathers information from many users in a very short time. In this method, preparing the right and effective questions is the critical issue.

13. i) Distinguish between integration testing and system testing. (10)

During integration testing, different modules of a system are integrated in a planned manner using an integration plan. The integration plan specifies the steps and the order in which modules are combined to realize the full system. After each integration step, the partially integrated system is tested.

There are four types of integration testing approaches. Any one (or a mixture) of the following approaches can be used to develop the integration test plan. Those approaches are the following:

  • Big bang approach
  • Top-down approach
  • Bottom-up approach
  • Mixed-approach

Big-Bang Integration Testing

It is the simplest integration testing approach, where all the modules making up a system are integrated in a single step. In simple words, all the modules of the system are simply put together and tested.


Bottom-Up Integration Testing

In bottom-up testing, each subsystem is tested separately and then the full system is tested. A subsystem might consist of many modules which communicate among each other through well-defined interfaces.

Top-Down Integration Testing

Top-down integration testing starts with the main routine and one or two subordinate routines in the system. After the top-level 'skeleton' has been tested, the immediate subordinate routines of the 'skeleton' are combined with it and tested.

Mixed Integration Testing

A mixed (also called sandwich) integration testing approach follows a combination of the top-down and bottom-up approaches. In the top-down approach, testing can start only after the top-level modules have been coded and unit tested.


When all the modules have been successfully integrated and tested, system testing is carried out. The goal of system testing is to ensure that the developed system conforms to the requirements laid out in the SRS document. System testing usually consists of three different kinds of testing activities:

α – testing: It is the system testing performed by the development team.

β – testing: It is the system testing performed by a friendly set of customers.

Acceptance testing: It is the system testing performed by the customer after product delivery to determine whether to accept or reject the delivered product.

System testing is normally carried out in a planned manner according to the system test plan document. The system test plan identifies all testing-related activities that must be performed, specifies the schedule of testing, and allocates resources. It also lists all the test cases and the expected outputs for each test case.


ii) Why is maintenance important in the SDLC? What are the different types of maintenance? (10)

Maintenance of a typical software product requires much more than the effort necessary to develop the product itself. Many studies carried out in the past confirm this and indicate that the relative effort of development of a typical software product to its maintenance effort is roughly in the 40:60 ratio. Maintenance involves performing any one or more of the following three kinds of activities:

Correcting errors that were not discovered during the product development phase. This is called corrective maintenance.

Improving the implementation of the system, and enhancing the functionalities of the system according to the customer’s requirements. This is called perfective maintenance.

Porting the software to work in a new environment. For example,porting may be required to get the software to work on a new computer platform or with a new operating system. This is called adaptive maintenance.

14. i) What are the different design heuristics for effective modularity? (10)

ii) Explain cohesion and coupling with necessary diagrams. Why are these concepts important in system design? (10)

Cohesion is a measure of functional strength of a module. A module having high cohesion and low coupling is said to be functionally independent of other modules.


Coincidental cohesion: A module is said to have coincidental cohesion if it performs a set of tasks that relate to each other very loosely, if at all. In this case, the module contains a random collection of functions.

Logical cohesion: A module is said to be logically cohesive if all elements of the module perform similar operations, e.g. error handling, data input, data output, etc.

Temporal cohesion: When a module contains functions that are related by the fact that all of them must be executed in the same time span, the module is said to exhibit temporal cohesion.

Procedural cohesion: A module is said to possess procedural cohesion if the functions of the module are all part of a procedure (algorithm) in which a certain sequence of steps has to be carried out to achieve an objective, e.g. the algorithm for decoding a message.

Communicational cohesion: A module is said to have communicational cohesion if all functions of the module refer to or update the same data structure, e.g. the set of functions defined on an array or a stack.

Sequential cohesion: A module is said to possess sequential cohesion if the elements of the module form parts of a sequence, where the output from one element of the sequence is input to the next.

Functional cohesion: Functional cohesion is said to exist if the different elements of a module cooperate to achieve a single function.

Functional cohesion: Functional cohesion is said to exist, if different elements of a module cooperate to achieve a single function.


Coupling between two modules is a measure of the degree of interdependence or interaction between them.


Data coupling: Two modules are data coupled, if they communicate through a parameter. An example is an elementary data item passed as a parameter between two modules, e.g. an integer, a float, a character, etc.

Stamp coupling: Two modules are stamp coupled, if they communicate using a composite data item such as a record in PASCAL or a structure in C.

Control coupling: Control coupling exists between two modules if data from one module is used to direct the order of instruction execution in the other. An example of control coupling is a flag set in one module and tested in another module.

Common coupling: Two modules are common coupled, if they share data through some global data items.

Content coupling: Content coupling exists between two modules, if they share code, e.g. a branch from one module into another module.
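The difference between the best and one of the worst coupling types above can be sketched as follows; the payroll functions are hypothetical:

```python
# Data coupling (desirable): modules communicate only through parameters.
def compute_tax(gross_pay):             # elementary data item passed in
    return gross_pay * 0.2

# Common coupling (undesirable): modules share a global data item.
payroll_state = {"gross_pay": 0.0}      # global shared by the functions below

def set_pay(amount):
    payroll_state["gross_pay"] = amount

def compute_tax_common():
    # Depends on whoever last wrote payroll_state -- harder to test and reuse.
    return payroll_state["gross_pay"] * 0.2

print(compute_tax(1000.0))   # independent of any global state
set_pay(1000.0)
print(compute_tax_common())  # same answer, but only because set_pay ran first
```

The data-coupled version can be tested and reused in isolation; the common-coupled version cannot be understood without knowing every module that writes the shared global.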


15. i) How are changes controlled in software engineering projects? (12)

ii) Explain the basic principles of software project scheduling. (8)

Project-task scheduling is an important project planning activity. It involves deciding which tasks would be taken up when. In order to schedule the project activities, a software project manager needs to do the following:

  1. Identify all the tasks needed to complete the project.
  2. Break down large tasks into small activities.
  3. Determine the dependency among different activities.
  4. Establish the most likely estimates for the time durations necessary to complete the activities.
  5. Allocate resources to activities.
  6. Plan the starting and ending dates for various activities.
  7. Determine the critical path. A critical path is the chain of activities that determines the duration of the project.
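Step 7, determining the project duration from the dependencies, can be sketched with a simple forward pass over a hypothetical activity network (the activity names and durations are invented, and the sketch assumes the activities are listed in a valid topological order):

```python
# Hypothetical activity network: activity -> (duration in days, prerequisites).
activities = {
    "specification": (5, []),
    "design":        (10, ["specification"]),
    "coding":        (15, ["design"]),
    "test planning": (4, ["specification"]),
    "testing":       (6, ["coding", "test planning"]),
}

# Earliest finish time of each activity (forward pass; assumes the dict
# is listed in a valid topological order, as above).
finish = {}
for name, (duration, preds) in activities.items():
    start = max((finish[p] for p in preds), default=0)
    finish[name] = start + duration

project_duration = max(finish.values())
print(project_duration)  # 36: specification -> design -> coding -> testing
```

The chain of activities whose earliest finish times add up to the project duration (here specification, design, coding, testing) is the critical path; any slippage on it delays the whole project.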

The first step in scheduling a software project involves identifying all the tasks necessary to complete the project. A good knowledge of the intricacies of the project and the development process helps the manager to effectively identify the important tasks. Next, the large tasks are broken down into a logical set of small activities which are assigned to different engineers. The work breakdown structure formalism helps the manager to break down the tasks systematically.

After the project manager has broken down the tasks and created the work breakdown structure, he has to find the dependencies among the activities. These dependencies determine the order in which the different activities will be carried out. If an activity A requires the results of another activity B, then activity A must be scheduled after activity B.

In general, the task dependencies define a partial ordering among tasks. Once the activity network representation has been worked out, resources are allocated to each activity. Resource allocation is typically done using a Gantt chart. After resource allocation is done, a PERT chart representation is developed; the PERT chart is suitable for program monitoring and control. For task scheduling, the project manager needs to decompose the project tasks into a set of activities and determine the time frame in which each activity is to be performed. The end of each activity is called a milestone. The project manager tracks the progress of the project by monitoring the timely completion of the milestones. If he observes that milestones are getting delayed, he has to carefully control the activities so that the overall deadline can still be met.

16. i) How do CASE tools help software engineers in software development? List any 5 CASE tools with their specific applications. (12)

A CASE (Computer Aided Software Engineering) tool is a generic term used to denote any form of automated support for software engineering. In a more restrictive sense, a CASE tool is any tool used to automate some activity associated with software development. Many CASE tools are available. Some assist in phase-related tasks such as specification, structured analysis, design, coding, and testing; others support non-phase activities such as project management and configuration management.

CASE tools are classified into the following categories:


1. Upper CASE tools

They support the analysis and design phases and include tools for analysis modeling and for report and form generation. The most popular classification of CASE technology and tools is based on the distinction made between the early and late stages of systems development. Many of the current CASE tools deal with the management of the system specification only, by supporting strategy, planning, and the construction of the conceptual level of the enterprise model. These tools are often termed upper CASE tools because they assist the designer only at the early stages of system development and ignore the actual implementation of the system. The emphasis in upper CASE is to describe the mission, objectives, strategies, operational plans, resources, component parts, etc. of the enterprise, and to provide automated support for defining the logical level of the business, its information needs, and for designing information systems to meet those needs.

2. Lower CASE tools

They support the coding phase and configuration management. These CASE tools deal with the application development itself, with regard to the efficient generation of code. They are termed lower CASE tools because they assist the developer at the stage of system generation and ignore the early stages of system requirements specification. The starting point of system development with lower CASE tools is the conceptual model of the information system. The conceptual modelling formalism is usually based on formal foundations in order to allow for automatic mapping to executable specifications and efficient validation and verification of the system specification itself.


3. Integrated CASE tools

These are known as I-CASE tools and support the analysis, design, and coding phases together. Central to the issue of CASE integration is the concept of an Integrated Software Development Environment (ISDE). An ISDE, as the term implies, provides support for the coordination of all the different activities that take place during a software project. There are different types of support that an ISDE can provide.


ii) What are the building blocks of CASE tools? (8)


The building blocks for CASE are illustrated in figure. Each building block forms a foundation for the next, with tools sitting at the top of the heap. It is interesting to note that the foundation for effective CASE environments has relatively little to do with software engineering tools themselves. Rather, successful environments for software engineering are built on an environment architecture that encompasses appropriate hardware and systems software. In addition, the environment architecture must consider the human work patterns that are applied during the software engineering process.

The environment architecture, composed of the hardware platform and system support (including networking software, database management, and object management services), lays the groundwork for CASE. But the CASE environment itself demands other building blocks. A set of portability services provides a bridge between CASE tools and their integration framework and the environment architecture. The integration framework is a collection of specialized programs that enables individual CASE tools to communicate with one another, to create a project database, and to present the same look and feel to the end user (the software engineer). Portability services allow CASE tools and their integration framework to migrate across different hardware platforms and operating systems without significant adaptive maintenance.

The building blocks depicted in the figure above represent a comprehensive foundation for the integration of CASE tools. However, most CASE tools in use today have not been constructed using all these building blocks. In fact, some CASE tools remain "point solutions": a tool is used to assist in a particular software engineering activity (e.g., analysis modeling) but does not directly communicate with other tools, is not tied into a project database, and is not part of an integrated CASE environment (I-CASE). Although this situation is not ideal, a CASE tool can be used quite effectively even if it is a point solution.

