Answer key (facit), SE exam 18-12-2000

References to Sommerville are to the 6th edition.
  1. a) See for instance Figs 3.5 and 14.7. Some points to note:

    b) (See Fig 14.2.) Many say that "bad" components (badly documented, ineffective) are a problem, but of course anything is a problem if done badly. Even with good components, problems can arise: maintaining a component library, finding components in it, engineers who are not satisfied with the components and want to do better themselves (a significant problem, judging by how many complain about components being badly written), lack of tool support, and harder maintenance.

  2. Note that this question is about the validation/verification of the requirements specification, i.e., not of the final product. So validation is w.r.t. the stakeholder, verification is w.r.t. the requirements specification itself (internal consistency) and maybe the requirements definition.

    a) have the stakeholders inspect the requirements specification; visualise it (e.g. with use cases); show a prototype
    b) inspection (checking for inconsistencies), formalisation, tracing back to the requirements definition

  3. Note: system models are models of the product, not the process.

    The general purposes are to make various (simple) abstractions of the (complex) system, to visualise the system or parts of it, and to have something to refer to in the requirements specification.

    Making formal models forces you to specify the system fully and unambiguously, in particular how it behaves in the more exceptional cases. They can be used for formal verification. Most formalisms nowadays come with visualisation aids and/or prototyping facilities, so you can also use formal models in communication with the stakeholders, even if you don't want to show the formal part itself.
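    As a toy illustration (the states and events are made up, not from the exam), a formal state-transition table forces you to decide the behaviour for every state/event pair, including the exceptional ones:

```python
# A minimal formal model: a transition table over two states and two
# events. The states and events are hypothetical, for illustration only.
states = {"idle", "running"}
events = {"start", "stop"}

transitions = {
    ("idle", "start"): "running",
    ("idle", "stop"): "idle",         # exceptional case, decided explicitly
    ("running", "start"): "running",  # another exceptional case
    ("running", "stop"): "idle",
}

# The model is only well-formed if it is total: every pair is covered.
assert all((s, e) in transitions for s in states for e in events)
```

    Writing the table down makes the "stop while already idle" case impossible to overlook; a prose specification can simply forget it.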

  4. See Section 11.2 of Sommerville. Some issues:

  5. Say a system sometimes causes a (safety-critical) error when it is run with insufficient memory. We add a small check that measures the available memory and refuses to run the system when there is not enough. This increases safety, but reduces availability.
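    A minimal sketch of such a guard (the threshold and the way memory is measured are hypothetical):

```python
MIN_BYTES = 64 * 1024 * 1024  # hypothetical safety threshold

def start_system(available_bytes):
    # Safer: the system never runs with too little memory.
    # Less available: sometimes the system refuses to run at all.
    if available_bytes < MIN_BYTES:
        raise RuntimeError("refusing to start: not enough memory")
    return "running"
```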

    Another example: a money changer. The safety risk is that it accepts counterfeit money. So we make it safer by making it refuse money more often. Refusing valid money can be considered a failure, so ROCOF goes up.

    There are of course many different examples. They all follow the pattern: safety = making sure that nothing bad happens. Therefore, when in doubt, make sure that nothing happens at all. But if nothing happens, the system is not available (AVAIL), or stops in the middle of something (MTTF/ROCOF).

    It is not a strong answer to say that making the system safer involves adding safety checks, which make the system more complex and therefore less reliable. More complex systems are less safe too, so making a system safer often means keeping it simple.

  6. In statistical testing, the probability of finding an error is proportional to the probability of the error occurring when the system is used, i.e. very low. So it would take extremely many tests to determine that the system is highly reliable.

    Furthermore, statistical testing depends on a user profile. But the system must be safe even when users deviate from their normal behaviour. Such deviation may be intentional (hackers trying to break the system) or not (panic makes people do the strangest things).

  7. a) Take the following steps.
    1. Draw a flow-chart of the program.

    2. Find one or more paths through the program that together pass through all the nodes (node coverage) or all the arrows (branch coverage), respectively. In this case, node coverage needs only one path: 1, 2, 3, 4, 3, 5. Branch coverage needs a second path to cover the branch from 1 to 3, say 1, 3, 5.

    3. This is a point that many of you omitted: find test data that makes the program follow these paths. The first path is followed when b<0 (and a can be anything). The second path is followed when b=0. For branch coverage, we could equally well choose the paths 1, 2, 3, 5 and 1, 3, 4, 3, 5. However, the path 1, 2, 3, 5 is infeasible: there is no test data that causes the program to follow this path.
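    The exam's program listing is not reproduced in this facit; the following is only a plausible reconstruction, consistent with the description (b controls the flow, node 2 is reached exactly when b<0, and a=0 with b<0 causes a division by zero):

```python
def power(a, b):
    # Computes a**b for integer b by repeated multiplication.
    if b < 0:            # node 1
        a = 1.0 / a      # node 2: ZeroDivisionError when a == 0
        b = -b
    result = 1.0
    while b > 0:         # node 3
        result *= a      # node 4
        b -= 1
    return result        # node 5
```

    With b = -1 the path is 1, 2, 3, 4, 3, 5; with b = 0 it is 1, 3, 5; and the path 1, 2, 3, 5 is infeasible because node 2 always leaves b positive, so the loop body runs at least once.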

    b) The only variable that affects control is b. So coverage testing will use a "typical value" for a. The error occurs for a very specific value of a (division by 0 if a=0 and b<0).

    c) See 20.1.2. It is important to note that boundary testing is a black-box method: it does not look at the code, but chooses boundary values based on the specification. The exponential function looks rather different for a>0 than for a<0, and is partly undefined for a=0. Even for the integer b, the equivalence classes >0, =0 and <0 make sense. You can also test very high and very low values of a and b. So a test with a=0, b<0 is almost certainly in the boundary test suite.
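    A boundary test suite built from these equivalence classes could be sketched as follows (the concrete representative values are made up):

```python
import itertools

# One representative value per equivalence class.
a_values = [2.0, 0.0, -2.0]   # a > 0, a = 0, a < 0
b_values = [3, 0, -3]         # b > 0, b = 0, b < 0

# All combinations of the classes: 9 black-box test cases.
test_cases = list(itertools.product(a_values, b_values))

# The failing combination a = 0, b < 0 is automatically included.
assert (0.0, -3) in test_cases
```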

    d) Division by 0 is a common error on the inspection checklist.

  8. See Section 23.1. Productivity is production per time unit, so the real question is how to measure the size of what is produced. Of course you can measure lines of code (LOC), but that has many disadvantages. A better measure is function points. When reusing components, it is even harder to find a good measure.
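    A small, made-up calculation shows one of the LOC disadvantages: it rewards verbosity. Suppose the same 10 function points are implemented in one month, in 1000 lines of assembler or in 200 lines of a high-level language:

```python
# Hypothetical numbers, for illustration only.
function_points = 10
months = 1
loc_assembler = 1000   # same functionality, low-level language
loc_highlevel = 200    # same functionality, high-level language

loc_productivity_asm = loc_assembler / months   # 1000 LOC/month
loc_productivity_hl = loc_highlevel / months    #  200 LOC/month
fp_productivity = function_points / months      # 10 FP/month either way

# By the LOC measure the assembler programmer looks five times more
# productive, although both delivered exactly the same functionality.
assert loc_productivity_asm == 5 * loc_productivity_hl
```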

  9. The quality standards assume a very direct relation between process quality and product quality. Indeed, they almost assume that the only way to produce better products is to improve the process. This is true in the manufacturing industry, where usually a machine carries out the process and makes many copies of the same simple product. In SE, however, people follow a process to make ever-different, complex products. Moreover, the quality of a manufactured product is often easy to measure, while the quality of software is harder to measure.

  10. These people can give an outside view, come with fresh ideas, avoid groupthink. They also force the team to be clear and explicit, and for example to keep the documentation up-to-date.

    When more people are needed in the project, they need only short training. (Part of the Mythical Man-Month (MMM) argument is still valid if the team grows, but new people may also be needed to replace team members who fall ill or quit.) Finally, having people from other projects around speeds up the transfer of (domain) knowledge between projects.

  11. No longer valid: a Silver Bullet must attack all parts of the process, not just one, in order to give an order-of-magnitude improvement. There are no more Silver Bullets that attack the accidents of software engineering; a new Bullet must attack the essence. Reuse attacks the essence: let someone else do it. Of course, we must not limit ourselves to reuse of code. The most important form of reuse is COTS software. Making the product is still expensive, but if millions of people share the cost, it becomes cheap.

    Still valid: you still have to decide what the requirements are, and make a design. That is the essence of SE. Reuse only attacks the accidental difficulty of expressing the design in code. And you must work to find suitable components. The integration test and maintenance (adding functionality) might even become harder. So you can never achieve an order-of-magnitude improvement by reuse alone.

    So the different standpoints differ in their perception of the problem:
    software cost versus the cost/time of running an SE project;
    looking at all reuse, including COTS, versus only component reuse.