JenkinsAPI Python - how to trigger and track the job result

2024/9/20 15:12:47

I am using JenkinsAPI to trigger parametrized jobs. I am aware of the REST API that Jenkins uses, but our setup does not allow using it directly, so the main means for me to trigger jobs is this library.

So far I have had no problem finding jobs on my server or triggering them, but I am facing two problems:

1) When I trigger a job, I have no clue about its outcome. I assumed the output of the job would be returned when I run the build_job function, but that is not the case. I need to know whether the job passed or failed, and I can't find a way to get this information, since I can't even retrieve the build number when I trigger the job.

2) I get an error when the job runs, although the job itself passes without issues:

raise ValueError("Not a Queue URL: %s" % redirect_url)

I did read up a bit, and it seems that Jenkins switches between http and https URLs, which confuses the library. If I understand correctly, this was deemed a Jenkins issue and, as such, was not fixed on the JenkinsAPI side.

This is my code so far. It connects to my Jenkins server, retrieves the list of jobs, and triggers a job, but it does not let me know whether the job passed or failed, and I get the error mentioned earlier.

Is there any way to get this to work so I can get the pass/fail outcome of the job I triggered?

from jenkinsapi.jenkins import Jenkins

jenkins_url = 'http://myjenkins_host:8080'
# Create server
server = Jenkins(jenkins_url, username='user', password='123456789abcdef')
# Check job and print description
for job_name, job_instance in server.get_jobs():
    if job_name == "testjob":
        print('Job Name:%s' % job_instance.name)
        print('Job Description:%s' % (job_instance.get_description()))
# Trigger job
params = {'a': 1, 'b': 2, 'c': True}
server.build_job("testjob", params)
# HOW do I get the result of this job???
Answer

I'm not a big fan of the Jenkins Python API and, to be honest, I haven't used it even once. I personally prefer to use the raw JSON API with Python; it suits me better (that's why my example uses the JSON API instead, but in the end the goal is still achieved via a Python script).

Now, to answer your question: you can track a job's status and result by polling it via the API every now and then.
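At its core, the approach needs just one JSON endpoint per build: it reports whether the build is still running and what its result was. A minimal illustration (the host, job name, and credentials below are placeholders):

import requests

# One GET returns the build's state; 'result' is null while it is running
build = requests.get(
    "http://localhost:8080/job/Dummy/lastBuild/api/json",
    auth=("USERNAME", "PASSWORD"),
).json()
print(build['building'], build['result'])   # e.g. False SUCCESS

But first things first: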

1. Prerequisites

Python 2.7 or 3.x and the Python requests library installed:

pip install requests

For Python 3.x:

pip3 install requests

Also: How to install pip

2. Python script to trigger and track the result

import requests
import time

jenkins_url = "http://localhost:8080"
auth = ("USERNAME", "PASSWORD")
job_name = "Dummy"
request_url = "{0:s}/job/{1:s}/buildWithParameters".format(
    jenkins_url,
    job_name,
)

print("Determining next build number")
job = requests.get(
    "{0:s}/job/{1:s}/api/json".format(
        jenkins_url,
        job_name,
    ),
    auth=auth,
).json()
next_build_number = job['nextBuildNumber']
next_build_url = "{0:s}/job/{1:s}/{2:d}/api/json".format(
    jenkins_url,
    job_name,
    next_build_number,
)

params = {"Foo": "String param 1", "Bar": "String param 2"}

print("Triggering build: {0:s} #{1:d}".format(job_name, next_build_number))
response = requests.post(request_url, data=params, auth=auth)
response.raise_for_status()
print("Job triggered successfully")

while True:
    print("Querying Job current status...")
    try:
        build_data = requests.get(next_build_url, auth=auth).json()
    except ValueError:
        print("No data, build still in queue")
        print("Sleep for 20 sec")
        time.sleep(20)
        continue
    print("Building: {0}".format(build_data['building']))
    building = build_data['building']
    if building is False:
        break
    else:
        print("Sleep for 60 sec")
        time.sleep(60)

print("Job finished with status: {0:s}".format(build_data['result']))

The above script works with both Python 2.7 and 3.x. Now a little explanation:

In the beginning, we resolve what number the future build will have, so that we can query it later on. After that, the build is triggered and the response is checked for errors: a 4XX client error or 5XX server error response will raise a requests.exceptions.HTTPError. The final step is simply querying the triggered build for its status for as long as it hasn't finished. But please note that a triggered build can sit in the queue for some time, hence the try/except block in the code. Of course, you can adjust time.sleep() to suit your needs.
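One caveat: reading nextBuildNumber up front can race with anything else that triggers the same job in the meantime. A more robust variant, sketched below under the assumption that your Jenkins returns the standard Location header on the trigger response (stock Jenkins does), follows the queue item instead of guessing the build number:

# Sketch: follow the queue item URL from the trigger response instead of
# relying on nextBuildNumber (same jenkins_url/auth/params as above)
response = requests.post(request_url, data=params, auth=auth)
response.raise_for_status()

# Jenkins answers the trigger with a Location header pointing at the queue item
queue_url = response.headers['Location'].rstrip('/') + '/api/json'

while True:
    item = requests.get(queue_url, auth=auth).json()
    if 'executable' in item:
        # The queue item now references the actual build that was started
        build_url = item['executable']['url'].rstrip('/') + '/api/json'
        break
    print("Build still in queue, sleep for 10 sec")
    time.sleep(10)

# build_url can now be polled for 'building' and 'result' exactly as in
# the main script above.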

Example output:

$ python dummy.py 
Determining next build number
Triggering build: Dummy #55
Job triggered successfully
Querying Job current status...
No data, build still in queue
Sleep for 20 sec
Querying Job current status...
Building: True
Sleep for 60 sec
Querying Job current status...
Building: True
Sleep for 60 sec
Querying Job current status...
Building: False
Job finished with status: SUCCESS

!PLEASE NOTE!

Depending on your Jenkins version and security settings, you may get the following error:

requests.exceptions.HTTPError: 403 Client Error: No valid crumb was included in the request for url: ...

Jenkins has CSRF protection enabled by default, which prevents one-click attacks.

To solve this you can either:

  1. Disable the Prevent Cross Site Request Forgery exploits checkbox in Jenkins Configure Global Security (not recommended).
  2. Obtain a crumb from /crumbIssuer/api/xml using your credentials and include it in your request headers.

The above script needs only minor modifications to use the Jenkins crumb:

crumb_data = requests.get(
    "{0:s}/crumbIssuer/api/json".format(jenkins_url),
    auth=auth,
).json()
headers = {'Jenkins-Crumb': crumb_data['crumb']}

Then pass those headers to the request that triggers the new build, like so:

print("Triggering build: {0:s} #{1:d}".format(job_name, next_build_number))
response = requests.post(request_url,data=params,auth=auth,headers=headers,
)
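Finally, coming back to the JenkinsAPI library from the question: instead of the fire-and-forget build_job(), its Job.invoke() returns a queue item that can be waited on. A minimal sketch, assuming a reasonably recent jenkinsapi version (and note that the http/https redirect issue mentioned in the question may still bite here):

from jenkinsapi.jenkins import Jenkins

server = Jenkins('http://myjenkins_host:8080',
                 username='user', password='123456789abcdef')
job = server.get_job('testjob')

# invoke() returns a QueueItem we can track, unlike build_job()
queue_item = job.invoke(build_params={'a': 1, 'b': 2, 'c': True})

# Wait for the queued item to become a real build, then for the build to end
queue_item.block_until_building()
build = queue_item.get_build()
build.block_until_complete()

print("Build #%d finished with status: %s"
      % (build.get_number(), build.get_status()))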

Related Q&A

Django test parallel AppRegistryNotReady

I am trying to understand how to run django tests in parallel with in memory sqlite3.I have django app with that structure:gbookorder...tests__init__.pytest_a1.pytest_b1.pyutils.pytest_a1.py and test_b…

ImportError: PyCapsule_Import could not import module pyexpat

I am using Jenkins to build a python (Flask) solution to deploy to Google App Engine. As part of the build process I run a few integration tests. One of them is failing with the following error. ERROR:…

Python - Get max value in a list of dict

I have a dataset with this structure :In[17]: allIndices Out[17]: [{0: 0, 1: 1.4589, 4: 2.4879}, {0: 1.4589, 1: 0, 2: 2.1547}, {1: 2.1547, 2: 0, 3: 4.2114}, {2: 4.2114, 3: 0}, {0: 2.4879, 4: 0}]Id lik…

Rescaling axis in Matplotlib imshow under unique function call

I have written a function module that takes the argument of two variables. To plot, I hadx, y = pylab.ogrid[0.3:0.9:0.1, 0.:3.5:.5] z = np.zeros(shape=(np.shape(x)[0], np.shape(y)[1]))for i in range(le…

f2py array valued functions

Do recent versions of f2py support wrapping array-valued fortran functions? In some ancient documentation this wasnt supported. How about it now?Lets for example save the following function as func.f…

Unique strings in a pandas dataframe

I have following sample DataFrame d consisting of two columns col1 and col2. I would like to find the list of unique names for the whole DataFrame d. d = {col1:[Pat, Joseph, Tony, Hoffman, Miriam, Good…

finding index of multiple items in a list

I have a list myList = ["what is your name", "Hi, how are you","What about you", "How about a coffee", "How are you"]Now I want to search index of all …

Debugging asyncio code in PyCharm causes absolutely crazy unrepeatable errors

In my project that based on asyncio and asyncio tcp connections that debugs with PyCharm debugger I got very and very very absurd errors.If I put breakpoint on code after running, the breakpoint never …

how to generate pie chart using dict_values in Python 3.4?

I wanted the frequency of numbers in a list which I got using Counter library. Also, I got the keys and values using keys = Counter(list).keys() and values = Counter(list).values() respectively, where …

How can I make start_url in scrapy to consume from a message queue?

I am building a scrapy project in which I have multiple spiders( A spider for each domain). Now, the urls to be scraped come dynamically from a user given query. so basically I do not need to do broad…