tdd/tdd.rst
changeset 120 7428e411bd7a
parent 117 fab0281a992f
child 131 8888712bed39
transferred to other parts of the program or to other modules from
here. This is an extremely handy feature, especially when we want to
test our modules individually. Now let us run our code as a
stand-alone script::

  $ python gcd.py
  Traceback (most recent call last):
    File "gcd.py", line 7, in <module>
      print "Test failed for the case a=48 and b=64. Expected 16. Obtained %d instead." % tc1
  TypeError: %d format: a number is required, not NoneType

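As an illustration, the stub and inline test that produce the failure above might look like the following sketch (shown in Python 3 syntax; the TypeError is caught and printed here so the snippet runs to completion, but the message text mirrors the traceback):

```python
def gcd(a, b):
    """Stub for the GCD routine: no body yet, so every call returns None."""
    pass

# Inline test against the stub, mirroring the failing run above.
tc1 = gcd(48, 64)
try:
    if tc1 != 16:
        print("Test failed for the case a=48 and b=64. "
              "Expected 16. Obtained %d instead." % tc1)
except TypeError as exc:
    # gcd() returned None, so the %d conversion fails,
    # e.g. "%d format: a number is required, not NoneType".
    print("TypeError:", exc)
```
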
Now we have our tests, the test cases and the code unit stub at
things.

Now let us run our script, which already has the tests written into
it, and see what happens::

  $ python gcd.py
  All tests passed!

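A passing implementation matching this run might look like the following sketch, using the chain of subtractions the text describes (the inline test harness and its exact messages are assumptions):

```python
def gcd(a, b):
    """Greatest common divisor via repeated subtraction (a sketch)."""
    while a != b:
        if a > b:
            a = a - b
        else:
            b = b - a
    return a

if __name__ == '__main__':
    # Inline tests, in the spirit of the script discussed in the text.
    if gcd(48, 64) == 16 and gcd(44, 19) == 1:
        print("All tests passed!")
    else:
        print("Test failed!")
```
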
Success! We managed to pass all the tests. But wasn't that code
simple enough? Indeed it was. If you take a closer look at the code
you will soon realize that the chain of subtraction operations can be
replaced
in the sample sessions. It complains if the results don't match as
documented. When we execute this script as a stand-alone script, we
get back the prompt with no messages, which means all the tests
passed::

  $ python gcd.py
  $ 

If we want a more detailed report of the tests that were executed, we
can pass -v as a command-line option to the script::

  $ python gcd.py -v
  Trying:
      gcd(48, 64)
  Expecting:
      16
  ok
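
Putting the fragments of this section together, a doctest-equipped gcd.py consistent with the sessions shown above might look like this sketch:

```python
def gcd(a, b):
    """Return the greatest common divisor of a and b.

    >>> gcd(48, 64)
    16
    >>> gcd(44, 19)
    1
    """
    if b == 0:
        return a
    return gcd(b, a % b)

if __name__ == '__main__':
    # Run the examples embedded in the docstrings as tests.
    import doctest
    doctest.testmod()
```
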
          return a
      return gcd(b, a%b)

Executing this code snippet without the -v option to the script::

  $ python gcd.py
  **********************************************************************
  File "gcd.py", line 11, in __main__.gcd
  Failed example:
      gcd(48, 64)
  Expected:
          del self.test_cases

  if __name__ == '__main__':
      unittest.main()

Since we don't want to read this file into memory each time we run a
separate test method, we will read all the data in the file into
Python lists in the setUp method. The entire data file is kept in a
list called test_cases, which happens to be an attribute of the
TestGCDFunction class. In the tearDown method of the class we
delete this attribute to free up the memory and close the
opened file.

Our actual test code sits in the method whose name begins with
**test_**, as said earlier: the test_gcd method. Note that we import
the gcd Python module we have written at the top of this test file,
and from this test method we call the gcd function within the gcd
module with each set of **a** and **b** values in the attribute
test_cases. Once we execute the function we obtain the result and
compare it with the expected result stored in the corresponding
test_cases entry, using the assertEqual method provided by our parent
class TestCase in the unittest framework. There are several other
assertion methods supplied by the unittest framework. For more
detailed information about this, refer to the unittest library
reference at [1].
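
A self-contained sketch of such a TestGCDFunction class follows. Here the data file is replaced by an inline list of (a, b, expected) tuples, and gcd is a stand-in definition rather than the imported module the text describes, so those names are assumptions:

```python
import unittest

def gcd(a, b):
    # Stand-in for the gcd module the text imports at the top of
    # the test file.
    if b == 0:
        return a
    return gcd(b, a % b)

class TestGCDFunction(unittest.TestCase):
    def setUp(self):
        # The text reads these (a, b, expected) cases from a data
        # file here; they are inlined for this sketch.
        self.test_cases = [(48, 64, 16), (44, 19, 1), (12, 4, 4)]

    def test_gcd(self):
        for a, b, expected in self.test_cases:
            self.assertEqual(gcd(a, b), expected)

    def tearDown(self):
        # Free the memory held by the attribute, as described above.
        del self.test_cases

if __name__ == '__main__':
    # exit=False so the suite can also be driven programmatically.
    unittest.main(exit=False)
```
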

nose
====

Now we know almost all the varieties of tests we may have to use to
write self-sustained, automated tests for our code. One question
remains, however: how do we easily organize, choose and run the tests
that are scattered across several files?

To further explain, the idea of placing tests within the Python
scripts and executing those test scripts themselves as stand-alone
scripts works well as long as we have our code in a single Python
file, or as long as the tests for each script can be run
separately. But in a more realistic software development scenario,
often this is not the case. The code is spread across multiple Python
modules, and maybe even across several Python packages.
       
In such a scenario we wish we had a better tool to automatically
aggregate these tests and execute them. Fortunately for us, there
exists such a tool called nose. Although nose is not part of the
standard Python distribution itself, it can be very easily installed
using the easy_install command as follows::

  $ easy_install nose

Alternatively, download the nose package from [2], extract the
archive and run the following command from the extracted directory::

  $ python setup.py install

Now we have nose up and running, but how do we use it? That is very
straightforward as well. We will use the command provided by nose,
called nosetests. Run the following command in the top-level
directory of your code::

  $ nosetests

That's all: nose automatically picks up all the tests in all the
directories and subdirectories of our code base and executes them
all. However, if we want to execute specific tests, we can pass the
test file names or directories as arguments to the nosetests
command. For a detailed explanation about this, refer to [3].
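
By default, nose collects modules, classes and functions whose names match its test pattern (names beginning with ``test``). A minimal test file it could pick up might look like this sketch (the file and function names are illustrative, and the fallback gcd definition is included only so the sketch runs on its own):

```python
# test_gcd.py -- nosetests collects this file because its name and
# its function names begin with "test".
try:
    from gcd import gcd  # the module under test
except ImportError:
    # Stand-in definition so this sketch is runnable by itself.
    def gcd(a, b):
        return a if b == 0 else gcd(b, a % b)

def test_gcd_simple():
    assert gcd(48, 64) == 16

def test_gcd_of_coprimes():
    assert gcd(44, 19) == 1
```
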
       

Conclusion
==========

Now we have all the trappings we need to write state-of-the-art
tests. To emphasize the same point again: any code written before the
tests and the test cases are in hand is flawed by design. So it is
recommended to follow the three-step approach below while writing
code for any project:

  1. Write failing tests with test cases in hand.
  2. Write the code to pass the tests.
  3. Refactor the code for better performance.

This approach is famously known to the software development world as
the "Red-Green-Refactor" approach [4].

[0] - http://docs.python.org/library/doctest.html
[1] - http://docs.python.org/library/unittest.html
[2] - http://pypi.python.org/pypi/nose/
[3] - http://somethingaboutorange.com/mrl/projects/nose/0.11.2/usage.html
[4] - http://en.wikipedia.org/wiki/Test-driven_development