I'm developing a test bench that runs multiple tests via a Python GUI and prints output like the following:
A Passed
B Passed
C Passed
D Passed
E Passed
The button in the GUI should change to 'Passed' only when A, B, C, D, and E have all passed; if any of these tests fails, it should say 'Failed'. How can I access this output, which is printed on the screen, from the GUI?
My code for tests is:
from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys, os, time
from PyQt4 import QtGui, QtCore
from progress.bar import Bar
import datetime
import thread

class MyTestBench(QDialog, QtGui.QWidget):
    def __init__(self):
        super(QDialog, self).__init__()
        self.setWindowTitle("Implementation")
        self.progressbar = QtGui.QProgressBar()
        self.progressbar.setMinimum(0)
        self.progressbar.setMaximum(100)
        self.run_test_button = QtGui.QPushButton('Run Your Tests')
        self.run_test_button.clicked.connect(self.run_test_event)

    def run_test_event(self):
        thread.start_new_thread(self.run_the_test, ("Thread-1", 0.5))
        thread.start_new_thread(self.run_the_progress, ("Thread-2", 0.5))

    def run_the_test(self, tname, delay):
        os.system("python nxptest.py my_testlist.txt")
        self.progressbar.setValue(100)
        if self.progressbar.value() == self.progressbar.maximum():
            time.sleep(3)
            self.run_test_button.setText('Run Your Tests')

    def run_the_progress(self, tname, delay):
        count = 0
        while count < 5:
            self.run_test_button.setText('Running.')
            time.sleep(0.5)
            self.run_test_button.setText('Running..')
            time.sleep(0.5)
            self.run_test_button.setText('Running...')
            value = self.progressbar.value() + 10
            self.progressbar.setValue(value)
            time.sleep(0.5)
            if self.progressbar.value() == self.progressbar.maximum():
                self.progressbar.reset()
            count = count + 1

app = QApplication(sys.argv)
dialog = MyTestBench()
dialog.setGeometry(100, 100, 200, 50)
dialog.show()
app.exec_()
The main challenge I'm facing here is that I'm new to GUI programming and I don't know how to access the output that is printed on the screen.
If you're trying to get the text output of a program, you can't run that program using os.system. As the docs for that function say:

The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.
If you follow those links, they'll show how to do what you want. But basically, it's something like this:
import subprocess

output = subprocess.check_output(["python", "nxptest.py", "my_testlist.txt"])
If you're using 2.6 or earlier, you won't have check_output; you can read the docs to see how to build it yourself on top of, e.g., communicate, or you can just install the subprocess32 backport from PyPI and use that.
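For reference, a minimal sketch of such a fallback, along the lines of the recipe in the docs (the error handling here is deliberately simplified and is my own approximation, not the exact stdlib implementation):

import subprocess

def check_output(*popenargs, **kwargs):
    # Run the command with stdout captured via a pipe.
    process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
    output, _ = process.communicate()
    retcode = process.poll()
    if retcode:
        # Mirror check_output's behavior of raising on failure.
        raise subprocess.CalledProcessError(retcode, popenargs[0])
    return output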
From a comment:
This works, but my only concern is that there are a lot of results for the tests which are printed before it actually prints A Passed, B Passed, etc. I'm looking for a way to get just this part of the string and not the whole output.
That isn't possible. How could your program have any idea which part of the output is "this part of the string" and which part is "a lot of results … which are printed before"?
If you can edit the programs being tested in some way (e.g., make them print their "real" output to stdout but their "extra" output to stderr, as sketched below, or provide a command-line argument that makes them skip all the extra stuff), that's great. But assuming you can't, there is no alternative but to filter the results.
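To illustrate the stdout/stderr split, here's a hypothetical sketch (the messages are invented for illustration). The tested script sends its chatter to stderr and only the result lines to stdout; since check_output captures only stdout by default, the GUI would see just the result lines:

# Inside the tested script (hypothetical example):
import sys

sys.stderr.write('setting up fixtures...\n')   # "extra" output -> stderr
sys.stderr.write('running 5 tests...\n')
sys.stdout.write('A Passed\n')                 # "real" output -> stdout
sys.stdout.write('B Passed\n')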
But this doesn't look very hard. If each line of "real" output is either "X Passed" or "X Failed" and nothing else starts with "X " (where X is any uppercase ASCII letter), that's just:
import string

test_results = {}
for line in output.splitlines():
    # Record lines like "A Passed" or "A Failed", keyed by the test letter.
    if len(line) > 2 and line[0] in string.ascii_uppercase and line[1] == ' ':
        test_results[line[0]] = line[2:]
Now, at the end, you've got:
{'A': 'Passed', 'B': 'Passed', 'C': 'Passed', 'D': 'Passed', 'E': 'Passed'}
If you want to verify that all of A-E were covered and they all passed:
passed = (set(test_results) == set('ABCDE') and
          all(value == 'Passed' for value in test_results.values()))
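Tying this back to the original question, you could then use passed to update the button (a sketch assuming the run_test_button attribute from your code, and glossing over the fact that Qt widgets should only be touched from the GUI thread):

# Inside MyTestBench, after collecting test_results:
self.run_test_button.setText('Passed' if passed else 'Failed')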
Of course you could build something nicer that shows which ones were skipped or didn't pass or whatever. But honestly, if you want something more powerful, you should probably be using an existing unit testing framework instead of building one from scratch anyway.
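For instance, with the standard unittest module you get per-test collection and pass/fail reporting for free. A minimal, self-contained sketch (the test names and assertions are invented stand-ins):

import unittest

class MyTests(unittest.TestCase):
    # Hypothetical stand-ins for tests A and B.
    def test_a(self):
        self.assertEqual(2 + 2, 4)

    def test_b(self):
        self.assertTrue('Passed'.startswith('P'))

if __name__ == '__main__':
    unittest.main()   # prints per-test results and an overall summary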