31 October 2017

Continuous Requirements Engineering part 2/2

This is the second part of a two-part blog by Bogdan Bereza. Read the first part here >

Test analysis helps the requirements process

Testers have complained for a long time that testing starts too late in projects. It is much more than just a testing problem, though, because it means that the huge potential which test analysis and design have for radically improving requirements goes unused.

Asking uncomfortable questions


By asking uncomfortable questions – which testers should remember to do even after midnight, unless they want to be co-dependent – even high-level, vision-related requirements can sometimes be contested. “Why on earth would anyone ever want to do this?” – an exasperated tester may ask and, being the first person to really use an implemented system, discover that some assumed system goals are impossible or wrong.

Re-defining stakeholders


Test execution – as our exploratory colleagues often rightly stress – is a perfect opportunity for designing more test cases. This of course entails asking many “what if” questions. What if an unauthorized person gains access to this data? What if the receiving system is down? What if an infant presses this button not once, but many times? Both test design and test execution – including test result evaluation – provide ample opportunities for discoveries, for re-defining stakeholders, the system, and the system context boundaries.

Exploratory requirements elicitation


What is misleadingly called "exploratory testing" is really exploratory requirements engineering plus requirements-based testing.

Crazy idea? Not at all. Initially, exploratory testing was invented to cope with situations where requirements were poor, or missing altogether. The main techniques promoted by exploratory testing are creative guesswork about what the tested software should do, finding out stakeholders’ real priorities, and identifying the quality attributes most important in the context where testing takes place. It is requirements elicitation on the fly, followed by a fast analysis in order to do the best testing possible – isn’t it?

Well, it is. The exploratory approach is really great when there are no requirements, or when they are not trustworthy or not complete.

The clinching evidence is HTSM (Heuristic Test Strategy Model), created by James Bach to help exploratory testers choose the right strategy fast and correctly. Like its more comprehensive predecessors – ISO 9126, FURPS and ISO/IEC 25010:2011 – HTSM is first of all a list of quality attributes, or requirement types, to be taken into account. Of course, this is necessary for an exploratory requirements process.

Discovering forgotten requirements


While designing test cases, additional “what if?” questions are asked. To some of them, there is no answer ready – you have to go and ask someone. This is a very powerful motive for finding more requirements. For psychological, technical and social reasons, such stubborn asking and questioning comes naturally while preparing tests, but can be judged pushy and somehow illicit in the early stages of requirements elicitation. It is therefore obvious that test analysis and test case design should be done simultaneously with requirements work, not later, nor much later – which is still common in IT projects.

Requirements analysis, modelling and breakdown


Requirements elicitation, analysis, modelling and specification are not four distinct process stages, performed sequentially, but rather four small steps, performed many times on the way from some mumbled stakeholder’s comment to the final, well-formulated, validated and accepted full-fledged requirement. Even in sequential development, requirements engineering is iterative. You hear some vague statement, you put it down hastily, go back home, give it a thought, draw a simple picture of what you think it means, go back to the stakeholder, or to another stakeholder, to ask again, many times, until you are really sure. This may include prototyping, or any other form of iterative/agile activity. An important element of this process is formulating your thinking along the following lines: do you really understand what the stakeholder means? How can you check that the implemented system fulfils this requirement?

A requirement’s testability is the single most important criterion for the breakdown process: if you do not know how to test it, you need to break it down further.
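
To illustrate – a minimal, invented sketch, not an example from the original text – a vague requirement such as “search must be fast” is untestable as stated, so it is broken down into measurable sub-requirements that a test can verify directly. The function name and the thresholds below are assumptions:

```python
# Hypothetical sketch: breaking an untestable requirement down until each
# piece has an obvious test. "search" is an invented stand-in for the system.
import time

def search(query):
    time.sleep(0.05)                 # placeholder for the real work
    return ["result 1", "result 2"]

def test_search_completes_within_two_seconds():
    # "must be fast" broken down into: a search completes within 2 seconds
    start = time.monotonic()
    search("test data")
    assert time.monotonic() - start < 2.0

def test_known_term_returns_results():
    # "must be useful" broken down into: a known term yields at least one hit
    assert len(search("test data")) >= 1
```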

Requirements verification and consolidation


To verify and validate systems, you test them (dynamic testing). You do not want to test too much, because it costs time and money, but you are not willing to risk testing too little, because it increases the risk of failures in operation. To be able to test as little as possible without risking too many and too serious failures, you need to build systems from good requirements, following a reliable development process.

Whatever the right amount of testing, you do not want many bugs – whether caused by wrong requirements or by implementation mistakes – to make their way to testing, because removing them after testing is more expensive, often much more expensive, than avoiding them or finding them at the requirements stage (yes, this is Boehm’s curve).
When building a bridge, you prefer finding bugs in its blueprint to crashing the whole reinforced concrete construction during a load test a week before opening it to traffic. The same applies to software.

That means testing requirements is the most effective way both to reduce how much you need to test the system built from them and to keep too many bugs out of it.
Professional testers, who have special testing skills, should be involved in requirements verification just as they are involved in dynamic testing. It is today generally accepted that testers help programmers test software, so consequently, it is time to promote the idea that testers should help requirements engineers test requirements.

When designing test cases from requirements (the test cases to be used later, during dynamic system testing), the design process itself is a very effective way of testing the requirements. In other words: designing dynamic tests from requirements is a great way to perform static requirements testing. By increasing the amount of time spent on testing requirements, you decrease the time required for system testing, which is – up to some level – an extremely profitable deal.

Through testing towards new requirements


Whether you work iteratively or sequentially, development projects are just steps in an IT product’s lifecycle, which is always iterative and incremental, whether you have chosen this or not. Market research and operational monitoring and maintenance are the main sources of new business ideas and therefore new requirements; testing is the second main source. Testing generates a lot of new ideas and improvement suggestions, which are often wasted, unless you treat testing as requirements elicitation for the next project and find ways to gather new ideas and use them later.

Requirements engineering is not a project phase, but a never-ending lifecycle activity, and the activities traditionally classified as testing really belong to it.

Bogdan Bereza

Book your place on one of Bogdan's courses!

26 October 2017

Attaching files to email made easy in Microsoft Outlook 2016


In Outlook 2016, adding attachments to an email message has been made extremely easy. All the files you have worked on recently are now ready and waiting under the Attach File button.
You can find the Attach File button on both the Message and the Insert tab while composing an email message.

When you click the Attach File button, a list of your most recently used files appears. The list shows all files, regardless of where they have been saved.

If you have saved the file to a shared cloud service or, alternatively, to your own OneDrive, Outlook offers the option of attaching the file either as a link or as a copy. You get to choose the attachment method after clicking the file name.


Of these, sharing a link is by far the more sensible option. That way the recipients always have the latest version of the file at hand, and any changes made are immediately visible to the other recipients of the message as well.


(Whether there are actually any files at work that belong on OneDrive to be shared, or whether everything belongs in some shared location, such as SharePoint team sites, is another story altogether. More on that another time.)

Anna Sahinoja
Product Group Manager, ICT

Anna firmly believes that competitiveness in daily work is also built on mastering one's tools.

25 October 2017

Welcome, Eficode!

I am delighted to announce that Tieturi's customers will from now on also get to enjoy the expertise of Eficode's specialists. Eficode is a Finnish design and technology company whose vision is to build the software development and digital services of the future with a team of over 200 experts. Eficode has offices in Helsinki and Tampere, as well as in Copenhagen, Stockholm and Gothenburg.

I am personally especially excited that we are bringing such strong expertise into our already strong group of experts. We are constantly looking for new training partners in order to guarantee our customers the best possible, up-to-date trainings. Eficode's expertise is impressive, and the company is a forerunner especially in the field of DevOps.

Tieturi's goal is to ensure Finland's strong IT competence in the future as well. Eficode's expertise fits this mission more than well. We initially cooperated with Eficode on customer-specific trainings, and as the cooperation ran smoothly, we decided together to expand it. The very first additions to our portfolio are DevOps trainings, one of Eficode's areas of special expertise.

Eficode's experts will also be seen on our software development side, first of all next week (30 October 2017) on our Java programming course.

So welcome aboard, Eficode!

Anna Sahinoja
Product Group Manager, ICT

24 October 2017

Can you really lead agility after all?

The continued and growing popularity of agile has brought fresh winds to the consulting and training market. The traditional and the old have been combined: there is the agile project manager, the agile tester, the SAFe Agilist, and even PRINCE2 has joined in. Kanban and DevOps are, at bottom, new productizations of old Lean principles. It is hard to invent anything genuinely new in management any more, but agile leadership, servant leadership and coaching leadership have been eagerly on display.

Amid the turmoil, it is worth recalling the core issues once more and understanding what is genuinely important and what is marketing communication.

Organizational culture


You will not succeed in demanding knowledge work with a “we just work here” attitude. What is needed is enthusiasm and faith in the meaningfulness of one's own work. Traditional organizations become paralyzed because the use of power destroys initiative. Staff recognize manipulation easily, and customers do not believe pretty marketing messages.

Intrinsic motivation is not a property of a person. It arises in an organization where the staff get to pursue their own goals together with customers. In a hierarchical bureaucracy, activity withers quickly. The person we fire may well turn into a star working for a competitor.

Flat organization


Today's multi-skilled teams can take care of their customers without a multi-layered and extremely specialized administration. Decisions come out right when they are made close to the customer. Savings arise as the amount of internal work shrinks substantially.

The product owner role in particular has proven difficult. There has been a desire to split it among several people representing different stakeholder groups. Or, alternatively, there is a separate project manager in addition to the product owner. It is also common to divide the work between a product owner representing the business and a technical product owner. Not to mention even more convoluted governance constructions.

Experiment first


The difficulty of the transition to agile also shows in the way change initiatives are planned and funded as big projects. The vision and roadmap easily turn into a project plan whose adherence to schedule and budget is monitored in the traditional way. Far too much time is spent on planning, and it is done far too early. More detailed planning does not remove the inherent uncertainty of complex initiatives.

Agile product development is phased roughly as follows:

1. First experiment (proof of concept)
2. Internal version
3. Alpha version
4. Beta version
5. First production version
6. Subsequent production versions

In each phase, feedback is collected, and based on it a decision is made on a possible continuation and on funding it. It is also perfectly natural that the work is not continued.

Dare to change direction


Fail fast is an often-heard principle that is hard to put into practice, among other reasons because we do not know what success is. We have no metrics for it. We are doing well if we even know why we embarked on the initiative in the first place.

Sunk costs and emotional commitment to the initiative complicate matters. Development teams' own backlogs increase their mental resistance to change. Changing the direction of a train moving at full speed is notoriously difficult. Changing the schedule and the content of a release train is similarly difficult. Problems are usually reported only under compelling circumstances, i.e. when actual production is about to begin.

In the DevOps world, where there are several releases a day, the challenges are different. There, too, one must stop every now and then to consider whether the whole endeavour makes sense.

Pentti Virtanen, Tieturi
PhD, Certified Scrum Trainer

At the very least, take a look at these trainings:

Agility for management >>

Agile leadership >>

Scrum training is being renewed

The Scrum Alliance is renewing its trainings. The Scrum fundamentals are being separated out of Certified ScrumMaster and Certified Scrum Product Owner, and these will focus more on the tasks and responsibilities of each role.

19 October 2017

Training on the pulse of the times, customer first

Tieturi is already a properly grown-up company: we have been around since 1983. You cannot quite call us middle-aged yet, but we have accumulated a commendable amount of age for a company! Next year we will celebrate our 35th birthday.

Right from the start, we have been strongly involved in supporting the working-life skills of Finns. Along the way, development has advanced at a tremendous pace, faster year by year. I myself entered working life when Tieturi was 12 years old, in 1995, so unfortunately I have not been able to experience technological development quite in step with the company. In my own memories, 1995 was still the VAX era: even email travelled as VAX mail, and the user interfaces had not always seen WYSIWYG.

Throughout all these years, Tieturi and its people have kept their finger on the pulse of the times, often ahead of their time. In order to serve our customers the way we want to, we keep our own competence firmly up to date. Our extensive trainer network enables deep expertise in every area, and thanks to it we can offer our customers a wide selection of up-to-date trainings.

Both our trainers and the product group managers responsible for our portfolio constantly follow what is new in information technology. Whether it is software development, testing, office tools or other techniques that make work easier, you can be sure that the latest trends find their way into our training content. Right now we are following, among other things, the transition from Microsoft's Skype for Business (an instant messaging and online meeting platform) to Microsoft Teams (a conversation-centred workspace in Office 365). It is not quite here yet, but it soon will be – and therefore it will be in our portfolio very soon as well. Likewise, Java 9 has already found its way into our Java trainings.

We have been here since the early days of computing, lived through the changes, and above all, throughout our journey, listened to our customers' needs. Our mission is to secure Finland's position as a leading country in information technology. We will see to that for at least the next 35 years!

Anna Sahinoja
Product Group Manager, Tieturi

18 October 2017

Architects beware: 60 years since Dartmouth

Originally posted on Informator's blog by Milan Kratochvil. See the original here >>

Many R&D-intensive industries experienced an initial period of teething troubles, about six decades between their seminal events and their commercial breakthrough, followed by exponential growth. Last summer, 60 years had passed since the 1956 Dartmouth Artificial Intelligence Conference. 


History...

In 1887, Ernst Mach, a physics professor at the Charles University in Prague, established the principles of supersonics and the Mach number relating velocity to the velocity of sound (thus inspiring his faculty successor Albert Einstein’s theory of relativity). Exactly 60 years later, test pilot Chuck Yeager reached the magical speed of Mach 1, breaking the sound barrier with the Bell X-1 rocket plane.

From there, Mach numbers skyrocketed to NASA’s Apollo missions, taking humans to the Moon and back. In aerospace, “the sky is the limit” applied to turnover figures as well.

In 1865, the scientific community (including Charles Darwin) missed the importance of Gregor Mendel’s research in Brno into inheritance in plants, but rediscovered Mendel’s Laws in the 20th century. Mendelian genetics and Darwin's natural selection finally merged in the 1930s as evolutionary biology. Six decades later (1990-2003), the Human Genome Project (HGP), the world's largest collaborative biological project so far, sequenced 92% of the human genome.

Genetics became a fast-grower with applications in diagnostics, forensics, archaeology, and more.

…repeats itself…


In 2016 (well, guess how many years after the Dartmouth AI conference), the accuracy of Machine Learning (ML) systems started to outperform humans in extreme tasks previously regarded as “out of reach” for AI. Some recent milestones are the games of Go and Poker, the latter by Mach’s and Einstein’s faculty heirs in Prague, and the University of Alberta Computer Poker Research Group in Edmonton.
AI delivers, which attracts brains and funds into the field. With the usual 60 years of teething in mind, we might call this the end of the beginning. AI departments of large US corporations in a variety of industry sectors are hiring AI experts by the hundreds.
Yet the technical progress looks less dramatic when compared to the pace of both corporate and social change it catalyzes. A Forrester prediction last fall said 16% of US jobs will be lost to intelligent systems in the near future, and only partly compensated for by the 9% of new jobs created by them (notably, jobs rather different from those that are vanishing).

…with an impact on architectural roles & landscape:


1. Much more IT(A) in Enterprise Architecture
EA will benefit from a stronger technical background. EA roles, architecture groups, and entire corporations that are used to absorbing new technology, and that have a strong background in IT including AI, have a competitive edge.

2. More tech leadership in management
That’s what built industries such as Scania, ABB, Volvo AB, and their modular configure-to-order (C2O) tradition. The current shift in IT is more manageable in cultures with a clear context and clear ideas of what they need forefront tech for. After decades of custom-tailored complex manufacturing, people in these organizations can come up with tangible proposals about leveraging, for example, BI and CI (customer insight) downstream: in bidding, sales, pricing, assembly planning, flexible automation solutions, or within the product itself, e.g. in autonomous vehicles.

3. Robotics outcompete offshoring
I argued ten years ago that robots and automation offered a more profitable long-term solution. Everybody continued to rush offshore anyway, although the underlying figures weren’t convincing. Now (guess how many years after the Dartmouth AI conference…), AI has triggered a U-turn in corporate sentiment. By 2018, the number of manufacturing jobs moving out of Sweden is going to equal the number of jobs moving back. The driving force: robotics and automation.

4. Architecture business as usual…
Architects often work with fancy tech within nearly medieval organizations under nearly stone-age governments. AI 2.0 might therefore feel painstaking. Intelligent robots can result in perpetual reorganizations (process innovators Michael Hammer and B. E. Willoch likened them to reshuffling the deck chairs aboard the Titanic), and governments in high-tech countries, socialist and conservative alike, can spend billions on “creating very simple jobs”, which is like herding cats: the simpler the jobs, the faster they jump (offshore, as some Swedish trade-union economists point out). Not to mention creating not-so-simple robot taxes that can push offshore the industries of an entire country or continent.
Architects aren’t enthusiastic about the mismatch they have had to live with for a long time: a surplus of complexity and information, but a shortage of cognition – in data as well as in society…

5. New flavors of Architecture Patterns
For example, the Layered pattern, typical of business systems (UI, business logic, object-relational mapping, and DB), has siblings in deep-learning systems with layers of artificial neural networks trained for one key task each: perception (input parsing), pattern recognition, reasoning (pattern classification and selection of steps to take), and either autonomous action (“vehicle brakes on”, for example) or interaction (e.g. voice generation, or calls to other systems).
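
As a loose illustration of this analogy (all class names below are invented placeholders, not any real framework's API), such a pipeline can be sketched as layers with one key task each, just like the layers of a business system:

```python
# Invented sketch: one "layer" per key task, chained like UI -> logic -> DB.
class Perception:            # input parsing
    def process(self, raw):
        return raw.lower().split()

class PatternRecognition:    # pattern recognition
    def process(self, tokens):
        return "obstacle" if "obstacle" in tokens else "clear"

class Reasoning:             # classification and selection of the step to take
    def process(self, pattern):
        return "brake" if pattern == "obstacle" else "continue"

class Action:                # autonomous action or interaction
    def process(self, decision):
        return f"vehicle: {decision}"

def run_pipeline(raw_input):
    result = raw_input
    for layer in (Perception(), PatternRecognition(), Reasoning(), Action()):
        result = layer.process(result)   # each layer feeds the next
    return result

print(run_pipeline("Obstacle ahead"))    # -> "vehicle: brake"
```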

6. Ever-bigger data versus custom-fit learning strategies
Accurate, fast learning from small data has an architectural savings potential rarely mentioned in the big-data buzz. Two routes can take you there:

a) pre-trained neural networks off the shelf (nowadays, you find those even in Matlab), built to solve a certain category of problems and ready to be extra-trained just for the “delta”, i.e. the specifics of yours. Roughly 90+ percent of the precision, at a fraction of the training time and cost (see the sketch after this list).

b) cross-breeds of several AI techniques, as indicated by poker systems, where an innovative adaptation of a well-proven algorithm made DeepStack run quite fast on a laptop, no longer requiring extreme searches on supercomputers.
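
For route a), here is a minimal sketch of the idea, assuming PyTorch/torchvision as one possible stack (the text does not prescribe any library; num_classes and the optimizer settings are invented): an off-the-shelf pre-trained network is frozen, and only a small new layer is trained for the "delta".

```python
# Hedged sketch of extra-training a pre-trained network for the "delta" only.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(pretrained=True)   # pre-trained network off the shelf

for param in model.parameters():           # freeze the pre-trained layers
    param.requires_grad = False

num_classes = 5                            # the "delta": your own categories
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

# Only the new head's parameters are given to the optimizer; training then
# proceeds as usual, on a small dataset and at a fraction of the full cost.
optimizer = optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
```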

7. Auditability, comprehensibility, V&V, reviews by humans
This category of ML challenges would be worth an entire blog site. The tradeoff between quality (accuracy of output) and auditability (comprehensibility of machine-made internal logic) grew trickier with each generation of ML technologies.

To cut a long story short, it’s easier to test that the “sub-symbolic” logic works accurately than to see why or how.

Summing up

Neither Enterprise nor IT Architecture is exempt from AI’s impact on business processes and technology. Machine learning affects systems, organizations, and society, from the way an architect can tweak a plain pattern up to the way policymakers can get things plain wrong…

Milan Kratochvil
Trainer, senior modelling and architecture consultant. 

Publications:
UML Extra Light (Cambridge University Press) and Growing Modular (Springer),
Advanced UML2 Professional (OCUP cert level 3/3).

IT architecture trainings >

TOGAF trainings >

11 October 2017

Continuous Requirements Engineering part 1/2

This is the first part of a two-part blog by Bogdan Bereza.

Test cases complement requirements

Test design as requirements elicitation

Test design makes assumptions that complement requirements elicitation (Lu-Tze)
Every test case adds something to the requirement, specified or assumed, from which it is derived. Yes, you’re right: I do mean that all tests are requirements-based, even those called – very misleadingly – “structure-based”, because even those test cases which check single source-code statements verify conformance with “what should be”, i.e. the requirements.

Let us ponder a simple example. There is a requirement stating that a given function accepts users aged between 20 and 70. A test analyst, using equivalence partitioning, designs the following test cases: 19, 20, 21, 50, 69, 70, 71, then adds some more tests (what would you call this technique?) with the age values -1, 0, “hallo, world” and 20,000,000,000.

The tester actually elicits additional requirements, more detailed than the original one, defining correct, expected system behaviour for some special values. Test design techniques are therefore generic requirements elicitation methods. For example, equivalence partitioning states that wherever you have requirements that define intervals, you can automatically add more requirements to them, defining system behaviour on the boundaries and outside the interval.
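
As a small sketch of the example above (validate_age is an invented stand-in for the real function), here is how those elicited values become executable, more detailed requirements, e.g. with pytest:

```python
# The boundary and robustness values from the example, turned into
# executable, more detailed requirements. validate_age is a stand-in.
import pytest

def validate_age(age):
    return isinstance(age, int) and 20 <= age <= 70

@pytest.mark.parametrize("age,expected", [
    (19, False), (20, True), (21, True),     # lower boundary
    (50, True),                              # valid partition representative
    (69, True), (70, True), (71, False),     # upper boundary
    (-1, False), (0, False),                 # invalid partitions
    ("hallo, world", False),                 # wrong type entirely
    (20_000_000_000, False),                 # absurd magnitude
])
def test_age_requirement_elicited_by_test_design(age, expected):
    assert validate_age(age) == expected
```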

Another example: a requirement states that all record field values can be edited and changed. The test analyst creates a number of test cases, attempting various combinations of field changes, using a number of different values. The test cases make the initial requirement more detailed by eliciting – using common sense, business knowledge or test design techniques – detailed examples of the requirement.

In agile Scrum, there is a method which is a very obvious and conspicuous example of test design actually being the continuation of requirements elicitation under another name: specification by example.

A requirement, specified as a user story, is described in more detail using examples (acceptance scenarios, acceptance criteria), which are added to it and later used as acceptance test cases. Nice, wise and really good. You do not need to use agile Scrum to adopt this method: it suits sequential development equally well. And you save a lot of money by avoiding expensive and time-consuming requirements tracing tools, since requirements and test cases are together from the start, stored in the same document or tool.
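
A minimal sketch of the idea, with an invented withdrawal user story (the Account class is a stand-in, not any particular framework's API): the acceptance scenarios double as acceptance test cases, so the requirement and its tests live together from the start.

```python
# User story (invented): "a registered user can withdraw money up to the
# account balance". The given/when/then comments are the acceptance scenarios.
import pytest

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_within_balance():
    account = Account(balance=100)   # Given an account holding 100
    account.withdraw(40)             # When the user withdraws 40
    assert account.balance == 60     # Then the new balance is 60

def test_withdrawal_beyond_balance_is_rejected():
    account = Account(balance=100)   # Given an account holding 100
    with pytest.raises(ValueError):  # When 101 is withdrawn, it is rejected
        account.withdraw(101)
    assert account.balance == 100    # Then the balance is unchanged
```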

All this is not only an academic or intellectual curiosity: it is of prime practical importance. The separation – traditional and still prevalent today – of requirements elicitation and test design procedures makes no sense, because it artificially separates two similar activities which for all practical purposes belong together. If they were closely connected in projects, and performed in co-operation, system development would be better: more effective and more efficient.

Test design as requirements modelling

I first learned how to model system behaviour using state diagrams not for requirements engineering, but for testing purposes. I needed a framework to help me understand complex system behaviour better than a chaotic, wordy requirements spec written in natural language could. Besides, having a model was handy for designing test cases, for keeping track of my test coverage, and even for getting fresh test ideas: a look at my state graph, or a glimpse of empty cells in my state transition matrix, often put me into a very exploratory, creative state of mind.
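
A minimal sketch of this modelling approach, with an invented two-state login flow: the transition table is the model, an unspecified transition fails loudly, and the "empty cells" – transitions no test has touched – fall out as a set that prompts new test ideas.

```python
# Invented two-state login model: the state transition table doubles as a
# requirements model and a test oracle.
TRANSITIONS = {
    ("logged_out", "login_ok"):   "logged_in",
    ("logged_out", "login_fail"): "logged_out",
    ("logged_in",  "logout"):     "logged_out",
    ("logged_in",  "timeout"):    "logged_out",
}

def run(events, state="logged_out", covered=None):
    for event in events:
        key = (state, event)
        assert key in TRANSITIONS, f"unspecified transition: {key}"
        if covered is not None:
            covered.add(key)         # track coverage of the transition matrix
        state = TRANSITIONS[key]
    return state

covered = set()
assert run(["login_ok", "logout"], covered=covered) == "logged_out"
assert run(["login_fail", "login_ok"], covered=covered) == "logged_in"
# The "empty cells" of the matrix - transitions no test has exercised yet:
print(set(TRANSITIONS) - covered)    # -> {("logged_in", "timeout")}
```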

However, making this model was not easy: I spent a lot of time developing it, and some more time making sure it was really right. And, uh, I did find some ambiguities in the initial natural-language description. Making the guys who had written it talk to me was not easy, either. You know, they were VIPs: business analysts, rubbing shoulders with the CEO and the CIO, and I was just a humble tester. When I discovered their description was not only ambiguous, but downright wrong here and there, my time investment became greater still.

Whose job was I doing then? A tester who goes into debugging is rightly said to spend her or his time doing developer’s job. A tester analysing requirements documents and making models from them, then improving the initial requirements, spends time doing requirements engineer’s job.

I do not mean to say that testers are diligent and good, while requirements engineers are lazy and bad, because this is definitely not the case, nor the issue either. The issue is that a lot of requirements analysis, modelling and verification work is performed for test design purposes, so – as in the previous section – my conclusion is that the separation of these two activities is very wrong and ineffective. Doing the same work twice, separately, by different people, and often at different times, is wasteful. We should change this and start working together, requirements engineers and testers.

Default requirements

Testing adds a number of universal, generic default requirements to other requirements. We are often not conscious of them. They should not be written down, because there are too many of them, and they are too obvious to justify wasting ink and paper on. For example, imagine there is a requirement defining that the system must, in certain situations, display a rose triangle in the upper left-hand corner of the screen. Why would you test it many times, with different data values, instead of just once or a few times? Because what’s actually being tested is the implicit default requirement, which is “and it must work for all such situations, and the system is never allowed to crash”.

Another such implicit, generic requirement, which complements explicit requirements during test execution, is “and nothing else should happen, unless it is really trivial”. If, besides the required rose triangle, a little yellow dot sometimes appears as well, you may choose to ignore it; but if instead of a little, harmless dot you get (mind, the rose triangle is there as well, as it should be!) a 2-minute-long film presentation, you may choose to report an incident.
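
A small invented sketch of how these two implicit requirements can be exercised (render_screen is a hypothetical stand-in for the system under test): the explicit check is repeated over many data values without crashing, and the test also asserts that nothing else appeared.

```python
def render_screen(situation):
    """Hypothetical stand-in: returns the list of UI elements drawn."""
    return ["rose_triangle_upper_left"]

def test_triangle_always_shown_and_nothing_else():
    for situation in [0, 1, -1, 10**9, "åäö", None]:  # "all such situations"
        elements = render_screen(situation)           # ...and never crash
        assert "rose_triangle_upper_left" in elements     # explicit requirement
        assert elements == ["rose_triangle_upper_left"]   # "and nothing else"
```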

The practical importance of this is again significant. Pretending that all, really all, requirements can and should be written down is futile – and wasteful. Knowing that there are many generic, commonly accepted, implicit requirements, used for testing but never written down, helps you handle them more effectively.

End of part 1/2

Bogdan Bereza

Book your place on one of Bogdan's courses!
