How to correctly create Python feature branch releases in development? (pip and PEP-440)

2024/10/6 1:36:47

I develop a Python library following the Gitflow development model and have a CI stage for unit testing and package upload to a (private) PyPI. I want to consume the uploaded package for testing purposes before I merge the feature branch back into the integration branch.

Other package managers (and popular tools) allow version identifiers that contain feature-branch-specific identifiers, such as 1.2.3-my-feature-alpha.1 in compliance with SemVer. However, PEP 440 forbids such versioning schemes, and twine even rejects such uploads.

What is a correct (or at least workable) way to name Python package versions for such pre-releases (which may be created in parallel from multiple feature branches) without version identifier conflicts, in compliance with PEP 440?

Answer

I have been thinking about this too.

Although not intended for this use, you could repurpose the local version identifier described in PEP 440, see: https://www.python.org/dev/peps/pep-0440/#toc-entry-5

In your case it would be 1.2.3+my-feature.alpha.1.
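As a quick sketch (using the `packaging` library, which implements PEP 440), you can check that such a version parses and see how it is normalized. Note that hyphens and underscores in the local segment are normalized to periods, so the canonical form differs slightly from what you typed:

```python
from packaging.version import Version

# Hyphens/underscores in a local version label are normalized to periods,
# so "my-feature" becomes "my.feature" in the canonical form.
v = Version("1.2.3+my-feature.alpha.1")
print(v)         # 1.2.3+my.feature.alpha.1
print(v.public)  # 1.2.3
print(v.local)   # my.feature.alpha.1
```

Because of this normalization, `1.2.3+my-feature.alpha.1` and `1.2.3+my.feature.alpha.1` compare as the same version.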

Releases with local version identifiers are ignored when querying releases by a compatible public version identifier like 1.2.3, but they can still be selected directly by their full local version identifier, e.g. 1.2.3+my-feature.alpha.1.

This is not a recommendation; it just mirrors the thoughts I had about the same problem.
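For a CI pipeline, one way to apply this scheme is to derive the local label from the branch name. Below is a minimal sketch; the helper name `local_version` and the argument names are hypothetical, not part of any standard API. It relies on the fact that a local version label may only contain lowercase alphanumerics separated by periods:

```python
import re

def local_version(public_version: str, branch: str, build: int) -> str:
    """Build a PEP 440-compliant version string whose local identifier
    is derived from a Git branch name (hypothetical CI helper)."""
    # Local version labels may only contain [a-z0-9] runs separated by
    # periods, so replace every run of other characters with one period.
    label = re.sub(r"[^a-z0-9]+", ".", branch.lower()).strip(".")
    return f"{public_version}+{label}.{build}"

print(local_version("1.2.3", "feature/my-feature", 1))
# → 1.2.3+feature.my.feature.1
```

Using a per-branch build counter at the end keeps parallel uploads from different feature branches from colliding, since each branch produces a distinct local label.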
