I have a table issue_logs:
 id | issue_id | from_status | to_status |             up_date              |  remarks
----+----------+-------------+-----------+----------------------------------+-----------
 29 |       20 |          10 |        11 | 2018-09-14 11:43:13.907052+05:30 | UPDATED
 28 |       20 |           9 |        10 | 2018-09-14 11:42:59.612728+05:30 | UPDATED
 27 |       20 |             |         9 | 2018-09-11 17:45:35.13891+05:30  | NEW issue
 26 |       19 |           9 |        11 | 2018-09-06 16:37:05.935588+05:30 | UPDATED
 25 |       19 |             |         9 | 2018-09-06 16:27:40.543001+05:30 | NEW issue
 24 |       18 |          11 |        10 | 2018-09-05 17:13:37.568762+05:30 | UPDATED
and a table rt_status:
 id |   description    | duration_in_min
----+------------------+-----------------
  1 | new              |               1
  2 | working          |               1
  3 | approval pending |               1
  4 | resolved         |               1
  5 | initial check    |               1
  6 | parts purchase   |               1
  7 | shipment         |               1
  8 | close            |               1
  9 | initial check    |               1
 10 | parts purchase   |               1
 11 | shipment         |               1
 12 | close            |               1
For a date range from_datetime = '2018-09-06T16:34' to to_datetime = '2018-09-14T12:27', I want to select all the issue log records whose status lasted longer than the duration_in_min defined for that status in the rt_status table. From issue_logs I should get the records with ids 29, 27, and 26. For the records with ids 29 and 26, which are the latest entries for their respective issues, the elapsed time should be measured from their up_date to to_datetime.
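To pin down the expected behaviour, here is a plain-Python emulation of the window logic I am after, using the sample rows from the tables above (timezones dropped for brevity; every status has duration_in_min = 1):

```python
from datetime import datetime, timedelta

# Sample rows from issue_logs above: (id, issue_id, to_status, up_date)
LOGS = [
    (29, 20, 11, datetime(2018, 9, 14, 11, 43, 13)),
    (28, 20, 10, datetime(2018, 9, 14, 11, 42, 59)),
    (27, 20,  9, datetime(2018, 9, 11, 17, 45, 35)),
    (26, 19, 11, datetime(2018, 9, 6, 16, 37, 5)),
    (25, 19,  9, datetime(2018, 9, 6, 16, 27, 40)),
    (24, 18, 10, datetime(2018, 9, 5, 17, 13, 37)),
]
# Every rt_status row above has duration_in_min = 1
DURATION_MIN = {status_id: 1 for status_id in range(1, 13)}

def exceeded(logs, frm, to):
    """Emulate LEAD(up_date) OVER (PARTITION BY issue_id ORDER BY up_date):
    a status lasts from its own up_date until the next row's up_date for the
    same issue, or until `to` when it is that issue's latest row."""
    rows = sorted(logs, key=lambda r: (r[1], r[3]))  # partition, then order
    out = []
    for i, (log_id, issue, status, up) in enumerate(rows):
        is_last = i + 1 == len(rows) or rows[i + 1][1] != issue
        nxt = to if is_last else rows[i + 1][3]
        if frm <= up <= to and nxt - up > timedelta(minutes=DURATION_MIN[status]):
            out.append(log_id)
    return sorted(out, reverse=True)

print(exceeded(LOGS, datetime(2018, 9, 6, 16, 34),
               datetime(2018, 9, 14, 12, 27)))  # -> [29, 27, 26]
```

This returns exactly [29, 27, 26]: row 28 is skipped because only 14 seconds elapsed before row 29, and rows 24 and 25 fall outside the date range.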
I would like to use func.lag and over to do it, but I'm unable to get the correct records. I am using PostgreSQL 9.6 and Python 2.7. How exactly can I get func.lag or func.lead to work using SQLAlchemy Core only?
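Since I may be mixing the two up, here is a minimal stdlib sketch of how I understand the difference between lag and lead (dummy values, not from the tables above; the real query would partition by issue_id and order by up_date):

```python
# LAG looks at the previous row within the partition, LEAD at the next one.
vals = [1, 2, 3]
lag = [None] + vals[:-1]   # like LAG(v)  OVER (ORDER BY ...) -> [None, 1, 2]
lead = vals[1:] + [None]   # like LEAD(v) OVER (ORDER BY ...) -> [2, 3, None]
print(lag, lead)
```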
What I tried:
    s = select([
            rt_issues.c.id.label('rtissue_id'),
            rt_issues,
            rt_status.c.duration_in_min,
            rt_status.c.id.label('stage_id'),
            issue_status_logs.c.id.label('issue_log_id'),
            issue_status_logs.c.up_date.label('iss_log_update'),
            (issue_status_logs.c.up_date -
             func.lag(issue_status_logs.c.up_date).over(
                 partition_by=issue_status_logs.c.issue_id)).label('mdiff'),
        ]).\
        where(and_(*conditions)).\
        select_from(
            rt_issues
            .outerjoin(issue_status_logs,
                       rt_issues.c.id == issue_status_logs.c.issue_id)
            .outerjoin(rt_status,
                       issue_status_logs.c.to_status == rt_status.c.id)).\
        order_by(asc(issue_status_logs.c.up_date),
                 issue_status_logs.c.issue_id).\
        group_by(issue_status_logs.c.issue_id,
                 rt_issues.c.id,
                 issue_status_logs.c.id)

    rs = g.conn.execute(s)
    mcnt = rs.rowcount
    print mcnt, 'rowcount'
    if rs.rowcount > 0:
        for r in rs:
            print dict(r)
This yields results that include wrong records, e.g. the issue log with id 28. Can anyone help me rectify the error?