Just Another Right-Wing Rant

Monday, October 09, 2006

What's Required

This post is probably not interesting to many people. Only to certain specialisations of engineers, in fact. But I think it is an important document nonetheless.

One of the things we engineers are required to do, from time to time, is to write a requirements specification. For those who aren't engineers, this is a document that tells you what a particular thing is supposed to do. Everyone who hears about this for the first time always thinks, "So how hard can it be?" Well, it turns out it is very easy to write a requirements specification, but very difficult to write a good one.

I am currently working as a test engineer. What that means is that I take a requirements specification, and a system that has been built to meet the specification, and I devise ways of proving that the system does or does not do what the specification says it does. I am, in this role, a consumer of requirements specifications. And I am fed up with what I'm seeing. There are far too many engineers who have no clue about how to write a good requirement, let alone a good requirements specification, and an almost equal number who have no idea how to take a requirements specification and design or test a system based on it.

So this post is a short(ish) guide to how to write and interpret a good requirements specification.

Writing a Specification

There are a lot of different attempts to explain what makes a good specification floating around, and there seems to be a lot of variance between them. I am going to present a selection of characteristics of a good specification. It is not an exhaustive selection. I suspect most of the lists around reflect the pet hates of their authors, and this one is no different. For the most part, the advice in this document is the direct result of bad experiences I have had.

So, a requirements specification must be:

Clear

This is absolutely critical. When someone reads a specification, they should be left with no doubt about what is required. There is a good way to judge clarity: Ask yourself, "If I gave a subcontractor this specification and nothing else and told them to go away and build it, what is the likelihood that I would get what I wanted?" Be appropriately cynical about your subcontractor when you make this assessment, i.e. assume they are idiots. They may not be, but make the assumption anyway (note that this does not form the basis of a good relationship with your subby; it is only for the purposes of the exercise).

There is a well-established dialect of the English language used in specifications, and you should stick to it strictly. This can hardly be emphasised enough. If something is a requirement, then shall is a verb that must appear in the sentence. If something is not a requirement, then shall is a verb with no place in the sentence. Requirements must all be assigned a unique identifier, and you must not renumber your requirements at any stage of the project.
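A couple of these rules are mechanical enough to check automatically. Here is a minimal sketch of that idea in Python (the requirement IDs, texts, and the `check_requirements` helper are all invented for illustration, not any real tool):

```python
import re

def check_requirements(reqs):
    """Flag common spec-dialect violations in a list of
    (identifier, text) requirement tuples."""
    problems = []
    seen_ids = set()
    for req_id, text in reqs:
        if req_id in seen_ids:
            problems.append(f"{req_id}: duplicate identifier")
        seen_ids.add(req_id)
        # A requirement must use 'shall'...
        if "shall" not in text.lower():
            problems.append(f"{req_id}: requirement does not use 'shall'")
        # ...and must not blur itself with non-binding verbs.
        if re.search(r"\b(should|may|will)\b", text, re.IGNORECASE):
            problems.append(f"{req_id}: mixes non-binding verbs into a requirement")
    return problems

reqs = [
    ("SRS-001", "The system shall log all operator actions."),
    ("SRS-002", "The system should be easy to use."),          # no 'shall'
    ("SRS-001", "The system shall start within 30 seconds."),  # duplicate ID
]
for p in check_requirements(reqs):
    print(p)
```

No tool will make your prose clear, of course, but catching duplicate identifiers and stray non-binding verbs before review is cheap.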

There are a few ways to achieve clarity. Some important ones include:
  • Get the spec reviewed by someone who has had minimal contact with the project. If they are left with questions, your spec is not clear.
  • Use notes and non-requirement text to give context to the requirements and help your reader get in the right frame of mind. Care is needed here, because the requirements must still stand on their own. But because requirements are usually brief, and phrased in fairly formal language, it can be hard for the reader to understand them without some help. I find that I generally write every requirement twice in the spec: Each section of the spec starts with some non-requirement text that explains what the section requires in general, non-binding language. Then the requirements restate that in formal language. The formal requirements are there to be tested against; trying to test against informal language is impossible. The informal text is there to help the reader understand what's going on, how the requirements interact with each other and with the wider context of the system.

Concise

This is not as critical as clarity, but still important if the project is going to come off well: The spec must be concise. What does that mean? A few things:
  • Don't include things that reflect your opinion on the system but aren't really requirements. Doing so limits the creativity of your design team, who are paid to be creative and to find creative solutions. You will often prevent them from finding the best solution by imposing unnecessary requirements.
  • The phrasing of individual requirements should be as concise as possible. The more wordy they are, the more room there is for ambiguity, errors, contradictions and misunderstandings.
An interesting subset of non-concise requirements are ones such as this: "The system shall allow calibration of all equipment that can be calibrated." This is not concise; in fact it says nothing at all. If the system does not allow calibration, then the equipment can't be calibrated, and falls outside of the scope of the requirement. It is impossible to fail this requirement, and therefore it adds nothing to the specification to include it. This looks like such an obvious problem, yet this very month I have encountered two such requirements on a project; the example above is lifted almost verbatim from a specification I am testing against.

What the author meant, of course, is, "For all equipment that provides a method of calibration, the system shall make that method of calibration accessible to a user," or something like that. But that's not what he wrote, and now I have the job of explaining to the customer, not only why our equipment really does do what this requirement says, but also why the requirement doesn't really mean what it appears to say.

Consistent

A requirements spec must not contradict itself. This sounds blindingly obvious, but it's actually pretty easy for it to happen. Contradictions usually creep in where one engineer writes the initial draft of the spec, then another engineer comes along to update it with new information. He doesn't realise that another part of the spec already deals with the aspect of the project he is concerned with, and he introduces a contradiction.

Again, independent peer reviews are critical to getting this right.

Complete

This is the counterpart to conciseness. A spec must cover everything you really require. Otherwise you will be delivered a system that meets your requirements specification, but does not meet your real requirements. Of particular help in this regard are the several standards out there that provide template document layouts, such as the older (now mostly obsolete) MIL-STD-498 and the newer IEEE-1220 series of standards. They provide a whole bunch of headings that requirements might fall under, and at least prompt you to think about them.

Functional

Where possible, the requirements specification should not tell the designers how to design the system. It should only tell them what the system needs to do, not how it should do it.

This can be hard, because many engineers writing requirements secretly wish they were designers instead, and have all these great ideas about how the system should be designed. They are often sure that their ideas are the best ones, and that the designers haven't really got a clue. If this is you, go get a job on a design team and stop stuffing up requirements specifications. You will make the world a better place in this way.

There are a few exceptions to this rule:
  • The requirements specification should completely specify the external interfaces of a system. These need to be expressed in concrete, design-like terms. It's no good, if your system has to have an RS-232 interface, saying, "The system shall interface to system XYZ via a TTL-compatible serial protocol with handshaking," just because you don't want to dictate the design. If it needs an RS-232 interface, the requirements should specify an RS-232 interface. If the cable for that interface has already been decided, then the requirements spec should specify which connector the interface will use, too.
  • If there is some external (i.e. customer, environmental, etc.) reason for using a particular piece of equipment in the design, then it should be included in the specification. If your customer wants Windows on Intel x86-64, it is no good putting in a requirement for a 'modern operating system on a 64-bit architecture,' because you will get OpenSolaris on an AMD architecture, sure as eggs.
  • If there is a good commercial reason for limiting the design, then that should go in the requirements spec too. For instance, if you have a special deal with Intel where you get processors and chipsets at 25% of retail, then you should specify Intel processors and chipsets. Some people will complain about commercial factors dictating design, but if you don't turn a profit then you don't have a job for long.

Feasible

It must be possible to construct a system that meets the requirements. Here 'possible' is a loosely defined term. It depends a lot on how much money you spend. However, if you are building a network between Sydney and London, for instance, it is no good having a requirement that says, "The network shall have a latency of less than 1ms." At first glance it might look reasonable, but a round trip from Sydney to London comes to somewhere close to 140ms at the speed of light, and the information can't travel faster than that.
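The arithmetic behind that figure is worth spelling out. The sketch below assumes a great-circle distance of roughly 17,000 km one way, with light at about 300,000 km/s in vacuum and about 200,000 km/s in optical fibre; real cable routes are longer than the great circle, so real latencies are worse still:

```python
# Rough physical lower bound on Sydney-London round-trip latency.
# Assumed figures: ~17,000 km one way (great circle),
# light at ~300,000 km/s in vacuum, ~200,000 km/s in fibre.
one_way_km = 17_000
round_trip_km = 2 * one_way_km

vacuum_ms = round_trip_km / 300_000 * 1000   # vacuum lower bound
fibre_ms = round_trip_km / 200_000 * 1000    # fibre lower bound

print(f"vacuum lower bound: {vacuum_ms:.0f} ms")
print(f"fibre lower bound:  {fibre_ms:.0f} ms")
```

Whichever figure you take, a 1ms latency requirement misses the physical lower bound by about two orders of magnitude.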

Verifiable

It must be possible to prove, in a controlled test, that a requirement is met. Some examples of non-verifiable requirements:
  • "The system's processing load shall never exceed 40% of available processing capacity." This is impossible to demonstrate in finite time, because it always might go over 40% just after the demonstration finishes. It might, in certain circumstances, be possible to prove this by analysing the code run on the processor, and showing that it will never consume more than 40% of the available capacity. However, in almost all systems the complexity of the code running prevents this. A better requirement: "During a ten-minute demonstration run of the scenario described in Annex A, the system's average processing load shall not exceed 40% of the available processing capacity." This demonstrates another couple of important points: You should already be thinking about how to test the system as you write the spec, and you should not be afraid of putting supporting data in annexes to the spec.
  • "The system shall allow the temporary fitment of any test equipment required for developing future modifications to the system." Another example lifted almost verbatim from a project I am working on. How do I know what equipment might be required for future modifications? How do I know what modifications might be considered in future? Do you really want me to go buy a specimen of every piece of test kit available in the world and make sure we can fit it? This is actually an example of a real requirement that is quite difficult to express in a verifiable way; a requirement for a certain type of test point, or something of that nature, might be more useful.
  • There are plenty more of these; google for "unverifiable requirement example" and you'll get a selection.
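To see why the rewritten load requirement in the first bullet is verifiable where the original was not, here is a minimal sketch of the test logic (the sample values are invented; a real test would collect one sample per second from the target's instrumentation over the ten-minute run):

```python
def average_load(samples):
    """Average CPU load over a fixed set of samples taken
    during a bounded demonstration run."""
    if not samples:
        raise ValueError("no samples collected")
    return sum(samples) / len(samples)

# Stand-in for samples collected during the demonstration (percent).
samples = [35.0, 38.5, 41.0, 36.5, 39.0]

avg = average_load(samples)
verdict = "PASS" if avg <= 40.0 else "FAIL"
print(f"average load {avg:.1f}% -> {verdict}")
```

Note that one sample exceeds 40% and the run still passes: the rewritten requirement deliberately constrains the average over a bounded window, which is exactly what makes it demonstrable in finite time.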

Interpreting a Specification

Another tendency I have noticed among test engineers is a certain degree of cluelessness when it comes to interpreting requirements. This is particularly so when there is a badly written spec which has ambiguities. Often I will state the plain meaning of a requirement, only to have some git reply, "Ah, that's what it appears to mean, but you could interpret it to mean..." Fill in your own pointless, stupid, senseless interpretation here. I feel like responding, "Yes, you could, if you were a particularly dim beetle, but this is the real world." That's no way to treat a colleague, but it's what I feel like saying.

It's late now, and I don't want to put as much effort into this section, but here are a few tips on interpreting requirements:
  • If there are two possible interpretations of a requirement, and one of them makes no sense, then the one that makes sense is the one to choose. The ambiguity is a defect in the spec, but that's no reason for you to compound it by choosing perverse interpretations.
  • The scope of a requirement should be limited by what makes sense. For instance, if a requirement states that, "The system shall provide calibration points for all COTS items in the design," then that should be limited to those COTS items that actually have some aspect that can be calibrated. The requirement is badly written, but that is no reason to demand a mountain of work from the design team, making them provide "calibration" points for their power supplies just because those were the only candidates you could think of for calibration points.
  • Where a requirement is ambiguous and neither of the above principles applies, the obvious interpretation should be the one chosen. This is just a logical extension of the previous points.
  • The meanings of words should always be controlled by the context of the project, and especially by the surrounding context of the requirements specification. I go wild when a spec contains an ambiguous requirement, and a note explaining how to interpret the requirement to resolve the ambiguity, and some loser who just wants to make work for themselves says, "Ah yes, it says that in the document, but it's not a part of the requirement, so you shouldn't take too much notice of it." Sigh.
  • It can be a useful exercise to analyse a requirement in detail. This can seem pedantic, but often helps you see what you need to demonstrate in your test. For instance, the requirement, "The system shall provide calibration points for all analog signals in the system's external interface," can be broken down into several points:
    • The requirement's scope includes only signals in the system's external interface.
    • The requirement's scope is limited to analog signals.
    • The system must provide calibration points for those signals that fall within the scope of the requirement. This is interpreted to mean that a suitably qualified user with appropriate test equipment must be able to measure the value of the signal as defined by the relevant interface specification to an accuracy sufficient to satisfy the calibration specification for the equipment.
    Thus you need to decide what signals fall in the scope of the requirement, using the criteria listed above, then show that for each one you can identify the value of the signal from the interface spec, and what accuracy is required by the calibration spec, then show that you can actually measure that value to that accuracy.
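The scoping step above can even be expressed mechanically. A minimal sketch (the signal names and attributes are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    analog: bool     # analog vs digital
    external: bool   # part of the system's external interface?

signals = [
    Signal("engine_temp", analog=True, external=True),
    Signal("status_word", analog=False, external=True),   # digital: out of scope
    Signal("bias_voltage", analog=True, external=False),  # internal: out of scope
]

# Scope of the requirement: analog AND in the external interface.
in_scope = [s.name for s in signals if s.analog and s.external]
print(in_scope)
```

Everything the test then has to demonstrate, the measurable value and the required accuracy, applies only to the signals that survive this filter.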
That's enough for now. I'm sure there is other stuff, but I've at least worked off some steam.
