I am using JenkinsAPI to trigger parametrized jobs. I am aware of the REST API that Jenkins uses, but our setup does not allow using it directly, so the main means for me to trigger jobs is through this library.
So far I have had no problem finding jobs on my server or triggering them, but I am facing two problems:
1) When I trigger a job, I have no clue about its outcome. I assumed the output of the job would be returned when I run the build_job
function, but that is not the case. I need to know whether the job passed or failed, and I can't find a way to get this information, since I can't even retrieve the build number when I trigger it.
2) I get an error when the job runs, even though the job itself passes without issues:
raise ValueError("Not a Queue URL: %s" % redirect_url)
I did some reading, and it seems that Jenkins switches between HTTP and HTTPS URLs, which confuses the library. If I understand correctly, this was deemed a Jenkins issue and, as such, was not fixed on the JenkinsAPI side.
This is the code so far. It connects to my Jenkins server, retrieves the list of jobs, and triggers a job, but it does not tell me whether the job passed or failed, and I get the error mentioned earlier.
Is there any way to get this to work so I can get the pass/fail outcome of the job I triggered?
from jenkinsapi.jenkins import Jenkins

jenkins_url = 'http://myjenkins_host:8080'
# Create server
server = Jenkins(jenkins_url, username='user', password='123456789abcdef')
# Check job and print description
for job_name, job_instance in server.get_jobs():
    if job_name == "testjob":
        print('Job Name: %s' % job_instance.name)
        print('Job Description: %s' % (job_instance.get_description()))

# Trigger job
params = {'a':1, 'b':2, 'c': True}
server.build_job("testjob", params)
# HOW do I get the result of this job???
I'm not a big fan of the Jenkins Python API and, to be honest, I haven't used it even once. I personally prefer to use the raw JSON API with Python; it suits me better (that's why my example uses the JSON API instead, but in the end the goal is still achieved via a Python script).
Now, answering your question: you could track the job status and result by querying it via the API every now and then. But first things first.
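In a nutshell: you trigger the build by POSTing to /job/<name>/buildWithParameters and then poll /job/<name>/<number>/api/json, whose JSON contains a building flag and, once the build has finished, a result field. To get a feel for that JSON, you can peek at any existing build (a small sketch with a made-up job Dummy and build number 42 on a local Jenkins, using the requests library from the prerequisites below):

import requests

build_json = requests.get(
    "http://localhost:8080/job/Dummy/42/api/json",
    auth=("USERNAME", "PASSWORD"),
).json()

print(build_json['building'])  # True while the build is running
print(build_json['result'])    # None while running, then e.g. SUCCESS or FAILURE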
1. Prerequisites
Python 2.7 or 3.x and the Python requests library installed:
pip install requests
For Python 3.x:
pip3 install requests
Also: How to install pip
2. Python script to trigger and track the result
import requests
import time

jenkins_url = "http://localhost:8080"
auth = ("USERNAME", "PASSWORD")
job_name = "Dummy"
request_url = "{0:s}/job/{1:s}/buildWithParameters".format(jenkins_url, job_name)

print("Determining next build number")
job = requests.get(
    "{0:s}/job/{1:s}/api/json".format(jenkins_url, job_name),
    auth=auth,
).json()
next_build_number = job['nextBuildNumber']
next_build_url = "{0:s}/job/{1:s}/{2:d}/api/json".format(jenkins_url, job_name, next_build_number)

params = {"Foo": "String param 1", "Bar": "String param 2"}

print("Triggering build: {0:s} #{1:d}".format(job_name, next_build_number))
response = requests.post(request_url, data=params, auth=auth)
response.raise_for_status()
print("Job triggered successfully")

while True:
    print("Querying Job current status...")
    try:
        build_data = requests.get(next_build_url, auth=auth).json()
    except ValueError:
        print("No data, build still in queue")
        print("Sleep for 20 sec")
        time.sleep(20)
        continue

    print("Building: {0}".format(build_data['building']))
    building = build_data['building']
    if building is False:
        break
    else:
        print("Sleep for 60 sec")
        time.sleep(60)

print("Job finished with status: {0:s}".format(build_data['result']))
The above script works with both Python 2.7 and 3.x. Now a little explanation:
In the beginning, we resolve what number the future build will have so we can query it later on. After that, the build is triggered and the response is checked for errors: a 4XX client error or 5XX server error response will raise a requests.exceptions.HTTPError
exception. The final step is simply querying the triggered build for its status until it has finished. Please note that triggered builds can sit in the queue for some time, hence the try: except:
block in the code. Of course, you can adjust the time.sleep()
intervals to suit your needs.
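If you would rather handle a failed trigger yourself instead of letting the script die on the raised exception, you could wrap the POST in a try/except (a small optional sketch; the variable names match the script above):

import sys

try:
    response = requests.post(request_url, data=params, auth=auth)
    response.raise_for_status()
except requests.exceptions.HTTPError as err:
    # e.g. 403 when CSRF protection is enabled (see the note below), 404 for a wrong job name
    print("Failed to trigger build: {0}".format(err))
    sys.exit(1)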
Example output:
$ python dummy.py
Determining next build number
Triggering build: Dummy #55
Job triggered successfully
Querying Job current status...
No data, build still in queue
Sleep for 20 sec
Querying Job current status...
Building: True
Sleep for 60 sec
Querying Job current status...
Building: True
Sleep for 60 sec
Querying Job current status...
Building: False
Job finished with status: SUCCESS
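If you call this script from another process (a shell script, another CI job, etc.), a simple optional follow-up at the end of the script is to turn the final result into an exit code, for example:

import sys

# Jenkins reports the outcome as SUCCESS, UNSTABLE, FAILURE, ABORTED or NOT_BUILT
sys.exit(0 if build_data['result'] == 'SUCCESS' else 1)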
!PLEASE NOTE!
Depending on your Jenkins version and security settings, you can get the following error:
requests.exceptions.HTTPError: 403 Client Error: No valid crumb was included in the request for url: ...
Jenkins has CSRF protection enabled by default, which prevents one-click attacks.
To solve this, you can either:
- Disable the Prevent Cross Site Request Forgery exploits checkbox in Jenkins Configure Global Security (not recommended), or
- Obtain the crumb from
/crumbIssuer/api/xml
(or /crumbIssuer/api/json, as below) using your credentials and include it in your request headers.
The above script needs only minor modifications to use the Jenkins crumb:
crumb_data = requests.get(
    "{0:s}/crumbIssuer/api/json".format(jenkins_url),
    auth=auth,
).json()
headers = {'Jenkins-Crumb': crumb_data['crumb']}
And pass those headers to the request that triggers a new build, like so:
print("Triggering build: {0:s} #{1:d}".format(job_name, next_build_number))
response = requests.post(
    request_url,
    data=params,
    auth=auth,
    headers=headers,
)